The LHC Tier1 at PIC: experience from the first LHC run

J. Flix (jflix@pic.es), A. Pérez-Calero (aperez@pic.es), E. Acción, V. Acin, C. Acosta, G. Bernabeu, A. Bria, J. Casals, M. Caubet, R. Cruz, M. Delfino,
X. Espinal, E. Lanciotti, F. López, F. Martinez, V. Méndez, G. Merino, E. Planas, M.C. Porto, B. Rodríguez, and A. Sedov

Abstract
The Large Hadron Collider (LHC), at the European Laboratory for Particle Physics (CERN, Switzerland), started operating in November 2009 and had generated around 200 Petabytes of raw, simulated and processed data from all of its detectors by the end of its successful first run in February 2013. The data are managed by the largest scientific distributed computing infrastructure in the world: the Worldwide LHC Computing Grid (WLCG), which combines the computing resources of more than 170 centers in 34 countries. In the WLCG, the computing centers are functionally classified into Tiers. Eleven of these centers are the so-called Tier1s. They receive a copy of the raw data in real time and are in charge of massive data processing, storage and distribution. Spain contributes to the WLCG with one Tier1 center: Port d’Informació Científica (PIC), located on the campus of the Universitat Autònoma de Barcelona, near the city of Barcelona. PIC provides services to three of the LHC experiments (ATLAS, CMS and LHCb), accounting for 5% of the total Tier1 resources, and also acts as the reference Tier1 for the Tier2 centers in Spain and Portugal.
PIC: A high capacity service

CPU service: ~4000 CPUs, managed by Torque/Maui (http://www.clusterresources.com)
Disk service: ~5.5 PB, managed by dCache (http://www.dcache.org)
Tape service: ~8 PB, managed by Enstore (http://www-ccf.fnal.gov/enstore)
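As a minimal illustration of how work is handed to a Torque/Maui-managed batch farm (in WLCG production, jobs actually arrive through grid Computing Elements rather than by direct submission, and the queue name and resource limits below are invented for the example), a small Python wrapper around the standard qsub command could look like this:

import subprocess

def submit_job(script_path, queue="long", walltime="24:00:00"):
    """Submit a job script to Torque with qsub and return the job ID it prints."""
    result = subprocess.run(
        ["qsub", "-q", queue,
         "-l", f"walltime={walltime},nodes=1:ppn=1",
         script_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "1234567.batch-server.example.org"

if __name__ == "__main__":
    # hypothetical payload script; any executable shell script would do
    print(submit_job("run_reconstruction.sh"))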
A reliable service

Being closely coupled to the detectors' data acquisition, Tier1 services need to be extremely reliable. A powerful monitoring framework, constantly probing the sites' Grid services, provides peer pressure among the sites and helps ensure that the reliability of the WLCG service keeps improving.
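The production monitoring relies on dedicated WLCG test frameworks; purely as a schematic stand-in for such a probe, a check that a site's published service endpoints accept connections could be as simple as the following (the host names and ports are hypothetical examples):

import socket
import time

ENDPOINTS = {
    "srm":     ("srm.example.org", 8443),
    "gridftp": ("gridftp.example.org", 2811),
    "ce":      ("ce.example.org", 9619),
}

def probe(host, port, timeout=5.0):
    """Return (ok, latency_in_seconds) for a simple TCP connect test."""
    start = time.time()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.time() - start
    except OSError:
        return False, time.time() - start

if __name__ == "__main__":
    for name, (host, port) in ENDPOINTS.items():
        ok, latency = probe(host, port)
        print(f"{name:8s} {'OK  ' if ok else 'FAIL'} {latency:.2f}s")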
Getting prepared for the LHC restart

In 2015, the LHC experiments will restart data taking at increased collision energy and trigger rates. LHC computing needs to prepare for twice the data with a flat budget:
•  Common tools: advance towards generic tools shared by the LHC VOs: job submission, network and storage monitoring, etc.
•  Storage federations: integrate data transfers and storage resources (e.g. the CMS AAA project), benefiting from increased network rates (LHCONE) and including new transfer protocols, e.g. xrootd/http (see the sketch after this list).
•  Decouple where data is and where jobs run: once storage federations are set up, jobs can run at grid site A with remote data input from site B and output to site C.
•  Cloud computing and opportunistic resources: integrate the Grid infrastructure, which provides the baseline resources, with cloud resources on demand to absorb peaks: commercial clouds, HLT farms or even SCC sites.
•  Parallel computing and multi-core jobs: increased luminosity and pileup require processing events with improved memory management: multi-threaded applications running on multi-core CPUs.
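As an illustration of the federated-access idea, a job can fetch (or read directly) a file addressed by its xrootd URL, with the federation's redirector locating a site that actually holds a copy. This is only a sketch driving the standard xrdcp client from Python; the redirector host and file path are hypothetical:

import subprocess

def fetch_remote_file(xrootd_url, local_path):
    """Copy a file addressed by a root:// URL to local disk using xrdcp."""
    subprocess.run(["xrdcp", "-f", xrootd_url, local_path], check=True)

if __name__ == "__main__":
    fetch_remote_file(
        "root://redirector.example.org//store/data/Run2012A/example.root",
        "/tmp/example.root",
    )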
Acknowledgements
This work was partially supported by, and makes use of results produced by, the project "Implantación del Sistema de Computación Tier1 Español para el Large Hadron Collider Fase III", funded by the Ministry of Science and Innovation of Spain under reference FPA2010-21816-C02-00.
Tape Storage service
Tier1s provide the experiments with mass storage on tape for custodial replicas of raw and processed data, as well as of MC samples. Average read and write rates have increased over the years as the amount of data grows and new tape technologies become available. Current technology at PIC includes T10KC tape cartridges with a capacity of 5 TB each, and StorageTek and IBM tape libraries. Hourly average transfer rates have peaked at 1 GB/s.
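A quick back-of-the-envelope check on the figures above (rounded, decimal units): how many 5 TB cartridges an ~8 PB archive corresponds to, and how long filling one cartridge would take at the 1 GB/s peak hourly-average rate.

TAPE_ARCHIVE_PB = 8        # total tape capacity quoted above
CARTRIDGE_TB = 5           # T10KC cartridge capacity
PEAK_RATE_GB_S = 1         # peak hourly-average transfer rate

cartridges = TAPE_ARCHIVE_PB * 1000 / CARTRIDGE_TB
hours_per_cartridge = CARTRIDGE_TB * 1000 / PEAK_RATE_GB_S / 3600

print(f"~{cartridges:.0f} cartridges")                        # ~1600
print(f"~{hours_per_cartridge:.1f} h to fill one at 1 GB/s")  # ~1.4 h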
Data transfers

WLCG involves massive data transfers between Grid sites. High-performance links and reliable data transfer systems are a must. The main flows at PIC as a Tier1 are RAW data transfers from CERN, distribution of reduced data to Tier2s, and upload of MC samples produced at Tier2s. During Run 1 the monthly averaged rate for incoming (outgoing) transfers at PIC was about 250 (400) MB/s, with hourly peaks exceeding 2 GB/s. The Tier0 and the Tier1s are connected through a private 10 Gbps network (LHCOPN). Tier1s are currently connected to Tier2s via NRENs. All sites will soon be connected by a cutting-edge dedicated network (LHCONE).
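Converted into volumes (decimal units, 30-day month), those sustained rates correspond roughly to the following; a trivial sketch of the conversion:

SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_volume_tb(rate_mb_s):
    """Convert a sustained rate in MB/s into TB transferred in a 30-day month."""
    return rate_mb_s * SECONDS_PER_MONTH / 1e6

print(f"incoming: ~{monthly_volume_tb(250):.0f} TB/month")   # ~648 TB
print(f"outgoing: ~{monthly_volume_tb(400):.0f} TB/month")   # ~1037 TB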
Running jobs at PIC

PIC responsibilities as a Tier1 include running data processing jobs. Monte Carlo (MC) generation and processing are also among its tasks, along with a small proportion of analysis jobs, which normally run at Tier2s. Millions of jobs run annually at PIC, the main customers being ATLAS, CMS and LHCb, but other experiments' needs are covered as well. The CPU efficiency of LHC jobs has increased over the years and has been around 90% since the LHC start.
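CPU efficiency here means the ratio of CPU time actually consumed to wall-clock time occupied, aggregated over jobs; the real numbers come from the batch system accounting, and the job records below are made up purely to show the definition.

def cpu_efficiency(jobs):
    """jobs: iterable of (cpu_seconds, wall_seconds) tuples."""
    cpu = sum(c for c, _ in jobs)
    wall = sum(w for _, w in jobs)
    return cpu / wall

sample = [(41000, 43200), (80000, 86400), (20500, 21600)]   # hypothetical jobs
print(f"efficiency = {cpu_efficiency(sample):.0%}")          # ~94% for this sample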
LHCP 2013
Barcelona, Spain, May 13-18th, 2013
