LHCONE use cases



Geneva, March 8, 2011



A Use Case for the LHC Open Network Environment

Kors Bos (ATLAS), Ian Fisk (CMS)


This note describes how a first implementation of the LHCONE network could serve the LHC experiments early on and prove the validity of the roll-out of the entire network encompassing all sites involved in LHC data analysis. The requirements for such a network for the LHC experiments were described in the Bos-Fisk document[1], and the architecture was subsequently described and agreed at the Lyon OPN meeting[2]. It is expected that the outcome of this prototype network could serve as a blueprint not only for a network for the LHC experiments but for data-intensive sciences in general.


The need for this new network, which augments the existing LHCOPN, became clear after a year and a half of intense analysis of the LHC data. Network use was much higher than originally foreseen, and the usage patterns stressed the general-purpose R&E Internet to its limits. As the analysis effort is expected to grow more than linearly with the amount of data accumulated over the coming two years, it has become urgent to act now and stay ahead of potential network problems.


As of today, the experiments would be best served by improved network capacity between the T1s and the T2s, and among the T2s themselves, independent of their physical location. T3s will become functionally the same as T2s and will not be mentioned separately from here on. In fact, with LHCONE the functional difference between T2s and T1s will diminish as well, which follows the evolution of the experiments' computing models: the T1s have served as the primary (and archival) storage sites, but the large amount of new data is forcing the experiments to use well-managed T2 sites for primary storage more and more as well.


This is the model that was deployed from the beginning in the Nordic countries, where no distinction was made between T1 and T2 sites and the infrastructure was often referred to as a distributed T1: the needed functionality was shared among all participating sites, connected by a good network provided by NORDUnet.


As this is an evolution from a model in which the T1s played an important role, the first implementation of LHCONE should focus on connecting the T1s and on adding T2 sites progressively. As this is also a prototype intended as a proof of principle, it should connect T2s of as many different types and in as many different locations as reasonably possible, given the amount of work and capital investment involved.


Below, ATLAS and CMS sites from different countries and different continents are listed. The network would immediately become valuable for the experiments if even a very few sites from this list were connected, and technical feedback would follow very quickly, as all these sites are already heavily used for data analysis. On the other hand, as the existing LHCOPN and T2 site connectivity will not change, the infrastructure that was used for analysis in 2009 and 2010 remains unchanged and can only benefit from improved connectivity. This creates an opportunity for tests without hampering the ongoing analyses.


Candidate sites

in Europe:


T1: SARA, Amsterdam, NL

T1: RAL, Didcot, UK

T1: FZK, Karlsruhe, Germany  

T1: CCIN2P3, Lyon, France

T1: CNAF, Bologna, Italy  

T1: PIC, Barcelona, Spain


T2: DESY, Hamburg, Germany

T2: RWTH, Aachen, Germany [3]

T2: IIHE, Brussel, Belgium

T2: Prague, Czech Republic

T2: Cracow, Poland

T2: Glasgow, UK

T2: IC, London, UK

T2: NIKHEF, Amsterdam, NL

T2: NDGF, Ørestad, Denmark

T2: GRIF, Orsay, France

T2: IFAE, Barcelona, Spain

T2: IFCA, Cantabria, Spain

T2: Pisa, Italy

T2: Legnaro, Italy

T2: RRC_KI, Moscow, Russia

T2: IL-TAU, Tel Aviv, Israel


in America:


T1: BNL, Brookhaven (NY), USA

T1: FNAL, Batavia (IL), USA  

T1: TRIUMF, Vancouver, Canada

T2: Alberta, Canada


T2: AGLT2, Ann Arbor (MI), USA

T2: Chicago (IL), USA

T2: Purdue (IN), USA

T2: Madison (WI), USA

T3: SMU, Dallas (TX), USA

T3: ANL, Argonne (IL), USA

T3: Duke University, Durham (NC), USA


in Asia:


T1: ASGC, Taipei, Taiwan


T2: Tokyo Univ., Tokyo, Japan

T2: TIFR, Mumbai, India


A metric of success for the prototype LHCONE network would be that the current analysis effort improves with the change in connectivity. To make this more quantitative, "Sonar" and "HammerCloud" tests could be executed by the experiments to measure data throughput and analysis capacity. Close collaboration between the LHCOPN community and the experiments is therefore necessary.
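The throughput half of such a metric can be illustrated with a minimal sketch. This is not the actual Sonar or HammerCloud tooling that the experiments operate; the function names and the chunked-read interface below are hypothetical, chosen only to show how a transfer is timed and converted to a rate:

```python
import io
import time

def throughput_mbps(num_bytes, seconds):
    """Convert a transferred byte count and elapsed wall time to megabits per second."""
    return (num_bytes * 8) / (seconds * 1_000_000)

def timed_transfer(read_chunk, chunk_size=1 << 20):
    """Drain a chunked-read callable (e.g. a file or socket read method) and time it.

    read_chunk(n) must return up to n bytes, and b"" at end of stream.
    Returns (total_bytes, elapsed_seconds, megabits_per_second).
    """
    total = 0
    start = time.perf_counter()
    while True:
        chunk = read_chunk(chunk_size)
        if not chunk:
            break
        total += len(chunk)
    elapsed = time.perf_counter() - start
    return total, elapsed, throughput_mbps(total, elapsed)

# Example with an in-memory source standing in for a remote replica:
source = io.BytesIO(b"x" * (5 * (1 << 20)))  # 5 MiB of dummy data
total, elapsed, mbps = timed_transfer(source.read)
```

In a real test the `read_chunk` callable would wrap a transfer from a remote storage element, and many such samples per site pair would be aggregated over time to expose the connectivity change.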


It must be stressed that this is an initial list: the choice of sites has been driven by various considerations and by no means represents a preference or a ranking by the experiments. Any subset of the above sites makes a valid prototype to be tested by the experiments. It must also be clear that this is far from the complete list of sites to be connected: as made clear in the requirements document, the infrastructure must be open and must allow any site that has been validated for LHC analysis to join easily.




[3] Sites in blue were proposed by CMS; the sites in black are ATLAS, or ATLAS and CMS.


Submitted by Edoardo Martelli on Wed, 04/20/2016 - 17:07