FAQ

Q: Can one site have several LHCONE connections?

A: Yes.

Q: Will bandwidth scheduling happen at the FTS (application) level?

A: Several options are available. Doing it at the application (gridftp) level would correspond to bandwidth on demand (fine if bandwidth is available at the time), whereas doing it at the PhEDEx level (scheduling and capacity management) would allow for scheduling in advance. We need to experiment with different approaches to find what provides the maximum benefit whilst keeping the overall system manageable. The dynamic circuit service is currently at a pre-production stage. The DYNES project in the US will provide the first production-level experience with dynamic circuits between US LHC sites starting in summer 2011. Later on, operational integration of DYNES with LHCONE is foreseen.
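
For illustration only, the sketch below contrasts the two approaches in plain Python: "bandwidth on demand" asks for capacity at transfer time and succeeds only if the link is free right then, whereas advance scheduling books a future window. The CircuitService class and its methods are hypothetical stand-ins, not the FTS, gridftp, PhEDEx, or DYNES interfaces.

    from datetime import datetime, timedelta

    class CircuitService:
        """Toy stand-in for a dynamic-circuit provisioning service (hypothetical)."""

        def __init__(self, capacity_gbps):
            self.capacity_gbps = capacity_gbps
            self.reservations = []  # list of (start, end, gbps)

        def _available(self, start, end, gbps):
            # Capacity left over after reservations that overlap the requested window.
            used = sum(g for (s, e, g) in self.reservations if s < end and start < e)
            return self.capacity_gbps - used >= gbps

        def request_now(self, duration, gbps):
            # Bandwidth on demand: only succeeds if capacity is free at transfer time.
            return self.reserve(datetime.now(), duration, gbps)

        def reserve(self, start, duration, gbps):
            # Advance scheduling: book capacity for a (possibly future) transfer window.
            end = start + duration
            if self._available(start, end, gbps):
                self.reservations.append((start, end, gbps))
                return True
            return False

    svc = CircuitService(capacity_gbps=10)
    print(svc.request_now(timedelta(hours=2), gbps=8))        # True: link is free now
    print(svc.request_now(timedelta(hours=2), gbps=8))        # False: link is busy now
    tomorrow = datetime.now() + timedelta(days=1)
    print(svc.reserve(tomorrow, timedelta(hours=6), gbps=8))  # True: future window is free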

 

Q: How will we measure the success of LHCONE?

A: If the network continues not to be a problem! LHCONE is a solution to the problem of continuous growth in the demand for bandwidth; today's status quo cannot continue indefinitely.

Q: If the goal is to separate traffic, could we imagine separating VOs, or even having different traffic classes?

A: Absolutely. One can set up groups of sites that share bandwidth amongst themselves. For static layer-2 point-to-point connections, it is a question of procedures.

Q: What about getting IP addresses? Will a storage element have a CERN IP address?

A: The routers are on the periphery of LHCONE. A site's border router peers using an IP address of its own; behind the router, addresses stay the same. The border router will announce the route to the storage element to the other border routers (one would not want to announce all devices on campus).
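
As a purely illustrative sketch (standard-library Python, with RFC 5737 documentation address ranges rather than real site prefixes), the snippet below captures the idea of exporting only the storage subnet to LHCONE peers while the rest of the campus stays unannounced; it is not an actual router configuration.

    import ipaddress

    # Hypothetical campus address space; only the storage/data-mover subnet is
    # exported to LHCONE peers. The addresses themselves do not change.
    campus_prefixes = [
        ipaddress.ip_network("192.0.2.0/24"),       # general campus (example range)
        ipaddress.ip_network("198.51.100.0/25"),    # storage elements (example range)
        ipaddress.ip_network("198.51.100.128/25"),  # other servers (example range)
    ]
    lhcone_export = [ipaddress.ip_network("198.51.100.0/25")]

    def announced_to_lhcone(prefix):
        return any(prefix.subnet_of(exported) for exported in lhcone_export)

    for p in campus_prefixes:
        print(p, "announced to LHCONE" if announced_to_lhcone(p) else "kept internal")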

 

Q: HEP might end up funding networking that would otherwise be provided by national infrastructures. How do we avoid HEP paying more than it needs to?

A: LHCONE is a collaborative effort between infrastructure providers of all types for R&E activities, which is why it does not prescribe any single solution. It must be possible to reach end users through whatever infrastructure and funding is available, which is why a common architecture is important rather than a prescribed common implementation. However, sites must budget networking as a resource, similar to disk and CPU. It is clear that a balance must be maintained: it would not be useful to have a lot of CPU and disk that is inaccessible.

Q: Will LHCOPN and LHCONE merge at some stage?

A: Although there is no intention to do that today, both architectures will evolve. When a common approach that guarantees the QoS and operational simplicity we need becomes apparent, that will be the time to consider a common infrastructure.

Q: What can we do for sites that are geographically disadvantaged?

A: Openness will ensure that T2s can work with their best available partners to get the best solution.

Q: Do you plan to publish a timescale of deployment of sites?

A: This implies a centralised deployment plan. The community needs continued discussion with the experiments, with the original site list in the Bos/Fisk use case as the starting point. Some coordination activity will be organised in the coming months. From the initial deployment, growth will be "organic" and will depend on the means and ambition of the sites combined with the priorities of the experiments.

Q: One ATLAS problem is degradation of performance, and the expected performance is not clear. How will support for these kinds of incidents evolve?

A: LHCONE permits the use of hybrid networking, with dedicated channels in the core and even out to the end sites where needed and possible. This will make it simpler to track down problems. The support model will be based on that of the current LHCOPN, but the details still need to be worked out.
