You will learn how to create file archives, upload them to Amazon S3, and manage permissions and lifetimes, giving you the ability to back up any amount of data and to retain it for as long as you'd like. A number of open source and commercial backup and archiving tools will be demonstrated, as time permits.
You will also learn how to use built-in AWS facilities to quickly and easily create and restore snapshots of entire disk volumes.
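The archive-creation step described above can be sketched with Python's standard tarfile module. This is a minimal illustration only; the subsequent S3 upload (not shown) would typically go through an SDK such as boto3, and all paths and names here are made up for the example.

```python
import tarfile
import tempfile
from pathlib import Path

def create_archive(source_dir: str, archive_path: str) -> int:
    """Pack source_dir into a gzip-compressed tar archive and return its size in bytes."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    return Path(archive_path).stat().st_size

# Illustrative usage: archive a temporary directory containing one file.
with tempfile.TemporaryDirectory() as workdir:
    data_dir = Path(workdir) / "data"
    data_dir.mkdir()
    (data_dir / "notes.txt").write_text("back me up\n")
    size = create_archive(str(data_dir), str(Path(workdir) / "data.tar.gz"))
    print(size > 0)  # the archive was written and is non-empty
```

In a real backup flow the resulting `.tar.gz` would then be uploaded to an S3 bucket, where lifecycle rules control retention.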
This session will examine the many options the data scientist has for running Spark clusters in public and private clouds. We will discuss various environments employing AWS, Mesos, containers, Docker, and BlueData EPIC technologies, and the benefits and challenges of each.
Speakers:
Tom Phelan, Co-founder and Chief Architect - BlueData Inc. Tom has spent the last 25 years as a senior architect, developer, and team lead in the computer software industry in Silicon Valley. Prior to co-founding BlueData, Tom spent 10 years at VMware as a senior architect and team lead in the core R&D Storage and Availability group. Most recently, Tom led one of the key projects – vFlash, focusing on integration of server-based Flash into the vSphere core hypervisor. Prior to VMware, Tom was part of the early team at Silicon Graphics that developed XFS, one of the most successful open source file systems. Earlier in his career, he was a key member of the Stratus team that ported the Unix operating system to their highly available computing platform. Tom received his Computer Science degree from the University of California, Berkeley.
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. With Amazon RDS, you can deploy MySQL in minutes with cost-efficient and resizable hardware capacity. In this webinar, we'll discuss how to get the most out of the service, including techniques for migrating data in and out.
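One common technique for the "migrating data in" part is a dump-and-load flow built around mysqldump. The helper below only assembles the command line as a sketch; the endpoint, user, and database names are hypothetical, and actually running the dump requires the MySQL client tools and valid credentials.

```python
def mysqldump_command(host: str, user: str, database: str,
                      single_transaction: bool = True) -> list[str]:
    """Build the argv for dumping `database`, so the output can be piped
    into a MySQL client pointed at an RDS endpoint."""
    cmd = ["mysqldump", "-h", host, "-u", user, "-p"]  # -p prompts for the password
    if single_transaction:
        # Consistent dump of InnoDB tables without locking them.
        cmd.append("--single-transaction")
    cmd.append(database)
    return cmd

# Hypothetical RDS endpoint; in practice this comes from the RDS console or API.
cmd = mysqldump_command("mydb.example.rds.amazonaws.com", "admin", "appdb")
print(" ".join(cmd))
```

A real migration would run this with subprocess and stream the output into `mysql -h <rds-endpoint>`.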
[AWS Days Microsoft-LA 2015]: Best Practices for Backup and Recovery: Windows... - Amazon Web Services
Backing up Windows workloads can be challenging and cumbersome for many companies. Backup and recovery for Windows workloads on AWS, however, can be easy. This presentation will cover best practices for backup and recovery; how to configure Windows workloads to back up to AWS; pitfalls to look out for; and recommended reference architectures.
Gem Session on scaling AEM (CQ5). Topics include:
High Volume and High Performance Delivery
High Frequency Input Feed
High Processing Input Feed
High Volume Input Feed
Many Editors
Geo-distributed Editors
Many DAM assets
Geo-distributed disaster recovery
HPC and cloud distributed computing, as a journey - Peter Clapham
Introducing an internal cloud brings new paradigms, tools, and infrastructure management. When placed alongside traditional HPC, the new opportunities are significant. But getting to the new world with micro-services, autoscaling, and autodialing is a journey that cannot be achieved in a single step.
AWS offers storage, networking, and data transfer services so you can build and deploy solutions to extend backup and archive targets to the AWS Cloud, increasing scalability, durability, security, and compliance.
Accumulo includes a remarkable breadth of testing frameworks, which helps to ensure its correctness, performance, robustness, and protection of your vital data. This presentation takes you on a tour from Accumulo's basic unit testing up through performance and scalability testing exercised on running clusters. Learn the extent to which Accumulo is put through its paces before it is released, and get ideas for how you can similarly enhance testing of your own code.
Find this talk and others at http://www.slideshare.net/AccumuloSummit.
This talk was given at the Boston OpenStack meetup to introduce Postgres Plus Cloud Database, a product that builds a convenient cloud infrastructure around PostgreSQL. It offers quick provisioning, autoscaling thresholds, and both vertical and horizontal scaling. The product was initially introduced on AWS but has recently been ported to OpenStack. We will talk about the issues faced in moving between these two platforms and how one can maintain a truly cloud-centric product that runs on multiple IaaS platforms.
Visit http://aws.amazon.com/hpc for more information about HPC on AWS.
High Performance Computing (HPC) allows scientists and engineers to solve complex science, engineering, and business problems using applications that require high bandwidth, low latency networking, and very high compute capabilities. AWS allows you to increase the speed of research by running high performance computing in the cloud and to reduce costs by providing Cluster Compute or Cluster GPU servers on-demand without large capital investments. You have access to a full-bisection, high bandwidth network for tightly-coupled, IO-intensive workloads, which enables you to scale out across thousands of cores for throughput-oriented applications.
Scientific Computing in the Cloud: Speeding Access for Drug Discovery - Avere Systems
Scientific computing on the cloud lured scientists at H3 Biomedicine in Cambridge, Massachusetts, with the promise of the near-limitless compute capacity of Amazon EC2. Today, scientists run a wide array of applications in the cloud that contribute to the integration of human cancer genomics with chemistry and biology to discover a library of specialty cancer treatment drugs.
In this webinar, you'll hear how this organization has built cloud infrastructure in a way that reduces latency and gives them storage flexibility, and does so in a way that helps them save money and support their business strategy. The H3 Biomedicine story will be supported by a look at the cloud technology and AWS services that have enabled application migration to the cloud in a hybrid IT environment.
PowerPoint file (incl. animations!): http://db.tt/oQiXb9lq
These are the slides of the presentation "WordPress Optimization," given at WordCamp 2013.
How to improve your WordPress performance and speed up your website by more than 700%!
The most complete news from Mexico, Oaxaca, and the Costa Chica. Corruption denounced in the sale of plots in Chacahua; protected area in danger. Using tear gas, police block Sección 22 from entering the Oaxaca zócalo.
Norman Bergrun, RINGMAKERS OF SATURN.
Norman Bergrun is a scientist/engineer who worked in an above-top-secret capacity (his level of clearance, way above the President's) for Lockheed. Prior to that he was at NACA, a precursor to NASA.
Upon leaving Lockheed, he wrote “Ringmakers of Saturn” about the enormous craft spotted in the rings of Saturn and became something of an outcast in the scientific community. This interview covers his views on time travel, the nature of the vehicles that he says are creating the rings and much more… His conclusion is that the Ringmakers of Saturn are now creating rings around other planets and they are on their way here….
Groundbreaking and a real wake-up call for the mainstream scientific community not to mention the World.
Peru's current trade in goods with India is small but growing.
India's investment in Peru also shows signs of increasing.
Next June, negotiations are due to begin in Lima to reach a trade agreement with India, which would cover topics such as "tariffs, sanitary and phytosanitary measures, technical barriers to trade, investment, trade in services, movement of persons, and cooperation, among others."
Plan for Peace: For a renewal of international relations - Florian Brunner
In his Appeal of 18 June 1940, General De Gaulle issued this now-legendary declaration: "Whatever happens, the flame of French resistance must not and shall not be extinguished." Today we must continue to carry that flame, defending peace and freedom in a shaken and shifting world. After 1945, we created in Europe the conditions for lasting stability. The 2012 Nobel Peace Prize honored the European Union, which had fostered the construction of a united and peaceful space. But while we managed to bring the peoples of Europe together, we now lack the resolve to halt the terrible cycle of wars. We must be carried by a new ambition for peace. Our vocation is not to fall into line but to act in full independence, in the service of a shared vision. The world seems to be losing itself in a logic of incessant conflicts whose strings are pulled by influential and eager actors; great empires are activating all their levers of influence and action in a mechanical race for power; and widespread adventurism paralyses the expression of intelligence and the search for real meaning. It is time for France to assume its role and its message. It is time for the European Union to give itself the means to conduct truly universal action. We need a political Europe that asserts itself on the international stage. Only Europe can become the point of balance of a new world order. France has a decisive role to play in the construction of a new European architecture, and a place that counts in international relations. It is up to France and Europe to set themselves in motion again and shift the lines. To retake the initiative, to open dialogue, to be a driving force for the establishment of peace: these must be our resolutions.
We can no longer remain frozen in outdated patterns; we must recover our capacity to invent and innovate. Let us rediscover the meaning of History and carry a true vision, in the service of peace and reconciliation.
Discover what services we offer here at MagenTys, from BDD and DevOps to Agility Assessments. If you would like to find out more, visit www.magentys.io
Openness in Education, Systems Thinking & Educational Practice, Ed Media June ... - Anita Zijdemans Boudreau
Openness in education can be illustrated as expressions of iterative socio-technological innovations that reduce barriers and create multiple opportunities for practice. Through the convergence of collective intelligence and ICTs, particularly Internet-based applications, openness has been reincarnated as the “new paradigm of social production in the global knowledge economy” (Peters, 2008, p. 10). The ensuing open education renaissance—proliferated through open source, open access, open content, and MOOCs—has disrupted the insular worldview of the traditional academy and reignited debate about the purpose and future of formal education. This paper suggests that thinking of openness as a system, to examine both the whole and the sum of its parts, can provide a means for adapting and aligning educational practices to the significant shifts occurring outside of institutionalized settings. A literature review and evaluative tool are presented.
A note on rural haats, the oldest concept of rural supermarkets.
The author is a marketing expert specializing in emerging-class consumer insight and go-to-market strategy. You can follow him on Twitter @val_bhatia.
The pinnacle of brand building: joining the narrow elite of "love brands," something every branding expert probably wants to experience at least once in their life. We study with awe the stories of a few exceptional brands and the milestones of their incomprehensibly rapid rise. Apple, Google, Lego, Nike and the other lucky ones that have managed to become one with their fan base, to breathe and live together with their core audience, and to become part of everyday life.
Data Replication Options in AWS (ARC302) | AWS re:Invent 2013 - Amazon Web Services
One of the most critical roles of an IT department is to protect and serve its corporate data. As a result, IT departments spend tremendous amounts of resources developing, designing, testing, and optimizing data recovery and replication options in order to improve data availability and service response time. This session outlines replication challenges, key design patterns, and methods commonly used in today’s IT environment. Furthermore, the session provides different data replication solutions available in the AWS cloud. Finally, the session outlines several key factors to be considered when implementing data replication architectures in the AWS cloud.
Learn how Maxwell Health Protects its MongoDB Workloads on AWS - Amazon Web Services
Maxwell Health, a software-as-a-service healthcare benefits management provider, needed to meet recovery SLAs for MongoDB workloads on Amazon Web Services (AWS). The company turned to Rubrik Datos IO for a modern, scalable, cloud-native backup and recovery solution. Within minutes, Maxwell Health had launched Rubrik Datos IO RecoverX to protect its AWS environment. RecoverX helped Maxwell meet strict backup and recovery SLAs, simplify MongoDB data protection efforts, and save backup storage costs for Amazon S3.
Join our webinar to learn how Rubrik Datos IO enabled Maxwell Health to lower its recovery time by 30 percent and reduce storage costs by 90 percent for its MongoDB backups on AWS.
A presentation on the selection criteria, testing and evaluation, and successful zero-downtime migration to MongoDB. Additionally, details on Wordnik's speed and stability are covered, as well as how NoSQL technologies have changed the way Wordnik scales.
Nuts and bolts of running a popular site in the AWS cloud - David Veksler
I will share how we develop and host a popular publishing platform in the cloud with a limited budget and technology team.
We'll cover architecture, including a variety of services at Amazon Web Services such as elastic load balancing, S3, Elastic Beanstalk, and RDS in the context of a real site.
We'll cover how we control costs with Spot and burstable instances and scale up with distributed caching.
Finally, we'll discuss continuous deployment strategies for Windows and Linux-based cloud applications in the context of a distributed team using an agile process.
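The distributed caching mentioned above typically follows the cache-aside pattern. Here is a minimal sketch, with an in-process dict standing in for a shared store such as ElastiCache or memcached; the loader function and key format are illustrative, not from the talk.

```python
cache = {}  # stand-in for a shared cache such as memcached or Redis

def load_article(article_id: int) -> str:
    """Simulated expensive backend fetch (a database query on a real site)."""
    return f"article-{article_id}-body"

def get_article(article_id: int) -> str:
    """Cache-aside: try the cache first, fall back to the backend, then populate."""
    key = f"article:{article_id}"
    if key in cache:
        return cache[key]          # cache hit: skip the backend entirely
    value = load_article(article_id)
    cache[key] = value             # populate so the next request is a hit
    return value

print(get_article(7))                         # miss: loaded from the backend
print(get_article(7) == cache["article:7"])   # hit: served from the cache
```

With a real distributed store, the dict lookups become network calls and entries carry a TTL so stale content eventually expires.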
Optimizing training on Apache MXNet (January 2018) - Julien SIMON
Techniques and tips to optimize training on Apache MXNet
Infrastructure performance: storage and I/O, GPU throughput, distributed training, CPU-based training, cost
Model performance: data augmentation, initializers, optimizers, etc.
Level 666: you should be familiar with Deep Learning and MXNet
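The data augmentation listed under model performance can be sketched in a framework-agnostic way. Below is a toy random-crop augmentation on a 2-D "image" held as nested lists; in MXNet itself this would normally go through the image or Gluon transform APIs, and the sizes here are made up for the example.

```python
import random

def random_crop(image, crop_h, crop_w, rng=None):
    """Random-crop augmentation on a 2-D image given as a list of rows."""
    rng = rng or random.Random()
    h, w = len(image), len(image[0])
    top = rng.randrange(h - crop_h + 1)    # random vertical offset
    left = rng.randrange(w - crop_w + 1)   # random horizontal offset
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]

# Toy 4x4 "image"; a real pipeline applies a fresh crop per sample per epoch.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
crop = random_crop(img, 2, 2, rng=random.Random(0))
print(len(crop), len(crop[0]))  # 2 2
```

The point of the augmentation is that each epoch sees a slightly different view of the same sample, which improves generalization without extra data.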
Geek Sync | Deployment and Management of Complex Azure Environments - IDERA Software
You can watch the replay of this Geek Sync webinar in the IDERA Resource Center: http://ow.ly/pg7N50A4svf.
Today's data management professionals are finding their landscape changing: they have multiple database platforms to manage, multi-OS environments, and everyone wants it now.
Join IDERA and Kellyn Pot’Vin-Gorman as she discusses the power of auto deployment in Azure when faced with complex environments and tips to increase the knowledge you need at the speed of light. Kellyn will cover scripting basics, advanced Portal features, opportunities to lessen the learning curve and how multi-platform and tier doesn't have to mean multi-cloud.
Attendees can expect to learn how to build automation scripts efficiently, even if you have little scripting experience, and how to work with Azure automation deployments. This session will allow you to begin building a repository of multi-platform development scripts to use as needed.
About Kellyn: Kellyn Pot’Vin-Gorman is a member of the Oak Table Network and an IDERA ACE and Oracle ACE Director alumnus. She is the newest Technical Solution Professional in Power BI with AI in the EdTech group at Microsoft. Kellyn is known for her extensive work with multi-database platforms, DevOps, cloud migrations, virtualization, visualizations, scripting, environment optimization tuning, automation, and architecture design. She has spoken at numerous technical conferences for Oracle, Big Data, DevOps, Testing and SQL Server. Her blog, http://dbakevlar.com, and her social media activity under the handle DBAKevlar are well respected for their insight and content.
In this session, we review best practices for AWS big data analytics architectures and introduce the features and latest capabilities of Amazon Athena, an interactive query service that makes it easy to analyze data stored in Amazon S3 using standard SQL, along with customer case studies.
Speaker: Greg Khairallah, Head of Business Development for Amazon Big Data and Athena, Amazon Web Services
Using AWS for Backup and Restore (backup in the cloud, backup to the cloud, a...) - Amazon Web Services
Companies are using AWS to create and deploy efficient, fast, and cost-effective backup and restore capabilities to protect critical IT systems without incurring the infrastructure expense of a second physical site. In this session, we will talk about cloud-based services AWS provides to enable robust backup and rapid recovery of your IT infrastructure and data.
University of Alberta migrated their central Learning Management System from Blackboard Vista on Oracle to Moodle on PostgreSQL 9.0. We went from a pilot project of 13 courses in January 2011 to running all centrally supported courses (3600+) in Moodle in September 2012. Our central Moodle instance has seen more than 500,000 page loads and 24,000 unique visitors in a single day. Over the last two years we have learned a few hard lessons and overcome a few challenges in running PostgreSQL in a 24x7 production environment.
A brief introduction to the different storage options available on the AWS platform, and the value proposition of AWS in the disaster recovery (DR) scenario.
Similar to "Moodle is dead..." - Iain Bruce, James Blair, Michael O'Loughlin (20)
Designing Active Learning in Moodle – a preview of the Learning Designer tools - Eileen Kennedy, D. N. Dimakopoulos, Diana Laurillard
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
Broadening the scope of a Maths module for student Technology teachers - Sue Milne, Sarah Honeychurch, Niall Barr
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
A proposal for integrating Serious Games made with Unity3D into Moodle courses - Frank Poschner, Dieter Wloka
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
The Moodle Gradebook as a tool inducing regular revisions in students' learning process - Piotr Jaworski
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
Using the Moodle Quiz for Formative and Summative Assessment: Safe Exam Browser and Laptops for Assessments Projects - Mike Wilson
Presented at Moodlemoot Edinburgh 2014
www.moodlemoot.ie
Many a Mickle Makes a Muckle: A multitude of Moodle mods to enhance the student learning experience - Roger Emery, Daran Price
Presented at Moodlemoot Edinburgh 2014 www.moodlemoot.ie
Have you ever wondered how search works while visiting an e-commerce site, internal website, or searching through other types of online resources? Look no further than this informative session on the ways that taxonomies help end-users navigate the internet! Hear from taxonomists and other information professionals who have first-hand experience creating and working with taxonomies that aid in navigation, search, and discovery across a range of disciplines.
This presentation by Morris Kleiner (University of Minnesota), was made during the discussion “Competition and Regulation in Professions and Occupations” held at the Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found out at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Acorn Recovery: Restore IT infra within minutes - IP ServerOne
Introducing Acorn Recovery as a Service, a simple, fast, and secure managed disaster recovery (DRaaS) by IP ServerOne. A DR solution that helps restore your IT infra within minutes.
0x01 - Newton's Third Law: Static vs. Dynamic Abusers - OWASP Beja
If you offer a service on the web, odds are that someone will abuse it. Be it an API, a SaaS, a PaaS, or even a static website, someone somewhere will try to figure out a way to use it to their own ends. In this talk we'll compare measures that are effective against static attackers and how to battle a dynamic attacker who adapts to your counter-measures.
About the Speaker
===============
Diogo Sousa, Engineering Manager @ Canonical
An opinionated individual with an interest in cryptography and its intersection with secure software development.
This presentation, created by Syed Faiz ul Hassan, explores the profound influence of media on public perception and behavior. It delves into the evolution of media from oral traditions to modern digital and social media platforms. Key topics include the role of media in information propagation, socialization, crisis awareness, globalization, and education. The presentation also examines media influence through agenda setting, propaganda, and manipulative techniques used by advertisers and marketers. Furthermore, it highlights the impact of surveillance enabled by media technologies on personal behavior and preferences. Through this comprehensive overview, the presentation aims to shed light on how media shapes collective consciousness and public opinion.
Sharpen existing tools or get a new toolbox? Contemporary cluster initiatives... - Orkestra
UIIN Conference, Madrid, 27-29 May 2024
James Wilson, Orkestra and Deusto Business School
Emily Wise, Lund University
Madeline Smith, The Glasgow School of Art
2. The Problem
• On the 30th of July 2013, the VLE, DBA and Network teams of Information Services were invited to a meeting which was to test our business continuity with our VLE environment (Moodle). The teams were given the following scenario.
• All University systems have been shut down due to a full power failure which has affected both Craiglockhart and Merchiston, with no other current services at our Sighthill campus; this also means that there is no external internet access from inside the University.
• Moodle, being the most critical system at this time of the year, is essential and has to be back online as quickly as possible to allow students access to their current course work.
3. Team brief
• The team are allowed access to any of the offsite backup systems (VMs, data and databases).
• No University hardware can be used for the process.
• Staff were allowed to use their on-call laptops.
• Staff have access to budget to gain resources if needed.
8. What Solution? (Approx. 2hrs)
• The following decisions were made during the initial Emergency Incident meeting.
1. We would use Amazon Web Services and create a virtual machine in the cloud. AWS provided scalable solutions and import/export options for the database.
2. We would look to get the database and files backup from tape.
3. We would switch user accounts to manual.
4. We discussed how to communicate out to students.
9. Tasks involved
Obtain a database backup (Approx. 4hrs)
a. Installed Zmanda Community Edition to recover MySQL.
b. No access to the DBA, and use of the Zmanda recovery software was problematic (the backup had to match the version of MySQL), which meant a hot backup was obtained.
c. Truncated logs and statistics to speed up the import.
d. Imported the database into the AWS MySQL database.
Obtain Moodle user files from backup (Approx. 5hrs)
a. The backup came from Symantec NetBackup, as we found file storage was not going to tape at the time.
10. Register for an Amazon Web Services (AWS) account
a. Purchased AWS Business support.
Setup AWS (Approx. 30mins)
a. Launched the Amazon console in the correct region.
b. Created an EC2 instance: RHEL 6.4 64-bit with 7.5GB RAM and 1TB disk space.
c. Created a Key Pair; AWS uses public-key cryptography to secure the login information for your instance.
d. Communicated the AWS account credentials to key team members to allow other aspects of the service to be configured.
Transfer Moodle user files to Amazon (Approx. 7hrs)
a. The initial size of the backup was 475GB; this was reduced to 190GB after removal of duplicate directories and redundant course backup files.
b. Initially tried using WinSCP for the transfer, which was going to take 14hrs; switched to rsync and the transfer completed in 6hrs.
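The transfer figures above can be sanity-checked with quick arithmetic: moving 190GB in 6 hours instead of the projected 14 implies roughly a 2.3x speedup from switching to rsync. This is a rough average-rate estimate that ignores protocol overhead and link variability.

```python
def throughput_mb_per_s(gigabytes: float, hours: float) -> float:
    """Average throughput in MB/s for a transfer of `gigabytes` over `hours` (1 GB = 1000 MB)."""
    return (gigabytes * 1000) / (hours * 3600)

winscp = throughput_mb_per_s(190, 14)  # projected WinSCP rate: ~3.8 MB/s
rsync = throughput_mb_per_s(190, 6)    # observed rsync rate: ~8.8 MB/s
print(round(winscp, 1), round(rsync, 1), round(rsync / winscp, 2))
```

The ratio of the two rates is simply 14/6, since the same 190GB is moved in both cases.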
Tasks involved
11. Tasks involved
Obtain a database backup (Approx. 4hrs)
a. Installed Zmanda Community Edition to recover MySQL
b. No access to DBA and use of the Zmanda recovery software was problematic(backup had to match the version of mysql), meant a
Hot backup was obtained.
c. Truncated logs and statistics to speed up the import
d. Import Database into AWS MySQL database
Obtain Moodle user files from backup (Approx. 5hrs)
a. Backup came from Symantec NetBackup as we found file storage was not going to tape at the time.
Register for Amazon Web Services (AWS) account
a. Purchased AWS Business support.
Setup AWS (Approx. 30mins)
a. Launch Amazon console in correct region.
b. Create EC2 instance RHEL 6.4 64bit with 7.5GB RAM, 1TB disk space.
c. Communicated the AWS account credentials to key team members to allow other aspects of the service to be configured.
Transfer Moodle user files to Amazon (Approx. 7hrs)
a. Initial size of backup was 475GB, this was reduced to 190GB after removal of duplicate directories and redundant course backup
files.
b. Initially tried using WinSCP for transfer, this was going to take 14hrs, switched to RSYNC transfer complete in 6hrs.
12. Tasks involved
Setup AWS (Approx. 30mins)
a. Create EC2 instance RHEL 6.4 64bit with 7.5GB RAM, 1TB disk space.
b. Create a Key Pair - AWS uses public-key cryptography to secure the login
information for your instance.
c. Communicated the AWS account credentials to key team members to
allow other aspects of the service to be configured.
Transfer Moodle user files to Amazon (Approx. 7hrs)
a. The initial backup was 475GB; this was reduced to 190GB by removing duplicate directories and redundant course backup files.
b. Initially tried WinSCP for the transfer, which was going to take 14hrs; switching to rsync completed the transfer in 6hrs.
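Removing duplicate directories is what cut the backup from 475GB to 190GB. One way to find duplicate files is by checksum; the demo below builds a small scratch tree, since the real backup path is site-specific:

```shell
# Demo setup: two identical files and one distinct file (in the real
# migration this would be the 475GB backup tree).
BACKUP=$(mktemp -d)
printf 'same content' > "$BACKUP/course1_backup.zip"
printf 'same content' > "$BACKUP/course1_backup_copy.zip"
printf 'other content' > "$BACKUP/syllabus.txt"

# Group files with identical content by MD5 checksum.
# uniq -w32 compares only the 32-character hash prefix; -D prints every
# member of each duplicate group so they can be reviewed before deletion.
find "$BACKUP" -type f -exec md5sum {} + | sort | uniq -w32 -D
```

Reviewing the groups before deleting anything matters: course backup files can be byte-identical yet belong to different courses.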
15. Tasks involved continued...
Installation & Configuration of Services (Approx. 2hrs)
a. Installed Apache, PHP & MySQL; started the services.
b. Installed Git.
c. Cloned our Moodle code from GitHub, where all our commits are backed up automatically.
d. Backed up ignored files separately, for example config.php.
e. Ensured the Apache user has the correct permissions on the directory.
f. Altered Moodle's config.php with the new database credentials.
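Step e (Apache permissions) might look like the following on RHEL. The demo runs on a scratch directory because `chown` to the `apache` user needs root; the production form is shown in the comment, with hypothetical paths:

```shell
# In production, DATA=/var/moodledata and the web server runs as 'apache'
# on RHEL, so ownership would first be handed over with:
#   chown -R apache:apache "$DATA"
DATA=$(mktemp -d)
mkdir -p "$DATA/filedir"
touch "$DATA/filedir/content.bin"

# Lock directories to 700 and files to 600 so only the owning user
# (Apache, in production) can read the Moodle data.
find "$DATA" -type d -exec chmod 700 {} +
find "$DATA" -type f -exec chmod 600 {} +
stat -c '%a %n' "$DATA/filedir/content.bin"
```

Restricting moodledata this tightly is deliberate: Moodle serves user files through PHP, so nothing else needs direct filesystem access.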
16. Tasks involved continued...
Moodle Administration (Approx. 1hr)
a. Recreated the Moodle admin account.
b. Switched Moodle user accounts to ‘Manual’ authentication and regenerated passwords.
c. Successfully accessed Moodle user accounts and data on AWS; various admin/teacher/student accounts were accessed and tested.
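Regenerating a password for each manual account (step b) needs a source of randomness; one minimal sketch, assuming `openssl` is available, is:

```shell
# Generate one random password (illustrative only; a real run would loop
# over the affected accounts and apply these through Moodle's admin tools
# or CLI rather than echoing them).
NEWPASS=$(openssl rand -base64 12)
echo "generated password of length ${#NEWPASS}"
```

Base64-encoding 12 random bytes yields a 16-character string, which comfortably clears typical Moodle password-policy minimums.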
18. Lessons Learned
• Backups were only stored on high availability disks across two campuses
• Recommend
• Backup to tape every 2 weeks.
• Passwords for a number of the services were held only by the DBAs
• Recommend
• Store passwords in a consistent way across the teams.
• Give the teams access to all passwords.
• Further Backups
• Recommend
• Additional Emergency MySQL dump
• Filter out redundant files from backup to reduce time
• Migrate only necessary tables
• Search and replace text inside the SQL file before import (mainly for hardcoded URLs)
• Replicate data up to Cloud
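The search-and-replace recommendation above can be done with `sed` before the dump is imported. The demo fabricates a one-line dump; the table, file name, and hostnames are hypothetical:

```shell
# Demo: rewrite a hardcoded campus URL in the SQL dump before import.
DUMP=$(mktemp)
echo "INSERT INTO mdl_course VALUES ('https://moodle.oldcampus.example.edu/view.php');" > "$DUMP"

# -i edits in place; '|' as the delimiter avoids escaping the slashes
# in the URLs.
sed -i 's|https://moodle.oldcampus.example.edu|https://moodle.aws.example.edu|g' "$DUMP"
cat "$DUMP"
```

One caveat: a plain text substitution can corrupt PHP-serialized data stored in the database if the old and new URLs differ in length, so dedicated replace tools are worth considering for Moodle dumps.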
19. Lessons Learned continued…
• SMS students about the downtime of the system
• Recommend
• Store student SMS/phone details off campus or on on-call laptops
• Moodle
• Recommend
• Create basic scripts for routine database operations on user accounts
• Active directory is one of the main services to allow our system to work
• Recommend
• Run an Active Directory server in the cloud or on another university's network
20. Going forward
• Have a cloud Active Directory solution available.
• Have a parked AWS instance available when required.
• Use the Vagrant/PuPHPet GUI for a consistent setup and to manage virtual machines
21. Useful links
• Amazon Web Services - http://aws.amazon.com/
• Windows Azure (Cloud AD) - http://www.windowsazure.com/en-us/
• Vagrant/PuPHPet - https://puphpet.com/