Cloudscaling has been working with KT to build infrastructure cloud services for telcos and service providers using OpenStack Object Storage. They have helped KT launch object storage systems based on Swift and end-user cloud products. Building these infrastructure services requires integrating hardware, software, and operational components and considering aspects like billing, authentication, load balancing, and networking. OpenStack Object Storage provides a solid foundation but additional services need to be developed to fully support customers.
(ENT222) Reduce Business Cost and Risk with Disaster Recovery for AWS | AWS r... | Amazon Web Services
Given the distributed nature of today's workforce, many IT organizations must support branch offices and remote sites. These multiple sites create islands of infrastructure that are necessary to meet local performance and reliability needs, but are costly to manage and increase the risks associated with distributed data. Consolidation is key to reducing costs and eliminating risks, but how do customers leverage the power of AWS as part of this consolidation? Riverbed SteelFusion is a converged infrastructure solution, encompassing server, projected storage, networking, and WAN optimization. When combined with AWS Storage Gateway, SteelFusion allows customers to connect their on-premises infrastructure to AWS. Session attendees will learn how to leverage WAN Optimization and Projected Storage technologies as part of their IT strategy to consolidate and provide disaster recovery for branch offices and remote sites.
Sponsored by Riverbed.
Catalogic ECX: Copy Data Management for InterSystems Caché and Epic EHR | Catalogic Software
Copy Data Management is fast becoming a must-have solution for any Epic EHR environment. Catalogic ECX provides native, application-aware integration with InterSystems Caché and Clarity databases (SQL), VMs, and file systems to automate the creation and use of Caché database copies (snapshots, clones, and replicas) on your existing enterprise storage infrastructure. This allows you to meet Epic protection and recovery requirements, as well as provide quick, easy, and secure access to clones or full copies for development, testing, release, MDR, SUP, Build, and training environments. ECX supports Caché on Red Hat Linux (virtual and physical) and AIX.
Symantec NetBackup 7.6 benchmark comparison: Data protection in a large-scale... | Principled Technologies
The footprint of a VM can grow quickly in an enterprise environment, and large-scale VM deployments in the thousands are common. As the number of deployed systems grows, so does the risk of failure. Some critical failures are unavoidable, and data protection from a backup solution promotes business continuity. Long protection windows that require multiple jobs of different types can create resource contention with production environments and consume valuable IT admin time, so keeping system backups within a finite window matters.
In our hands-on SAN backup testing, the Symantec NetBackup Integrated Appliance running NetBackup 7.6 offered application protection to 1,000 VMs in 66.8 percent less time than Competitor “E” did. In addition, the Symantec NetBackup Integrated Appliance with NetBackup 7.6 created backup images that offered granular recovery without additional steps. These time and effort savings can scale as your VM footprint grows, allowing you to execute both system protection and user-friendly, simplified recovery.
Slow performance and unavailable critical applications can impede a company’s progress. You can apply patches and updates to improve application quality and user experience, but these changes need to be tested in resource-intensive environments before deployment. Keeping these applications connected to their data is vital, too, as on-premises events can put availability at risk.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. As we added VMs designed for test/dev environments, the production workload maintained an acceptable level of IOPS and achieved an average storage latency of less than a millisecond. The solution also kept data highly available with no downtime and no performance drop when we initiated a lost host connection for the primary storage. To run critical database applications of your company, consider the Dell EMC VMAX 250F for your datacenter.
Your datacenter is capable of doing great things—if you let it. Upgrades from Intel for compute, storage, and networking components can help your business support new services and expand your customer base. In our hands-on testing, we found that new Intel processors, high-bandwidth network components, and SATA or PCIe SSDs working together can boost your datacenter’s capabilities, which could translate to better business operations for your organization.
We will examine most of the features this “Swiss Army knife” provides. Apache Ignite is an in-memory fabric that sits between the database and the application layer. Powered by the H2 engine, it offers an in-memory, distributed, ACID, fully ANSI SQL-99 compliant, highly available (HA), and scalable database. It uses a non-consensus clustering algorithm (rendezvous hashing: https://en.wikipedia.org/wiki/Rendezvous_hashing) to scale further than many other NoSQL solutions. The tool respects the relational data model that we have used for so many years and eliminates traditional problems such as “expensive joins,” since it uses RAM as the primary storage medium. We will see what it can do in action through hands-on examples.
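The rendezvous hashing mentioned above can be sketched in a few lines of Python. This is an illustrative sketch of the general technique, not Ignite's actual implementation; the node and key names are made up.

```python
import hashlib

def rendezvous_node(key: str, nodes: list[str]) -> str:
    """Pick the node with the highest hash(node, key) score.
    No coordination or consensus round is needed: every client
    computes the same winner independently."""
    def score(node: str) -> int:
        digest = hashlib.sha256(f"{node}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(nodes, key=score)

nodes = ["node-a", "node-b", "node-c"]
owner = rendezvous_node("customer:42", nodes)

# When a node leaves, only the keys it owned move; every other key
# keeps its assignment, which is what makes the scheme scale smoothly.
survivors = [n for n in nodes if n != owner]
assert rendezvous_node("customer:42", survivors) in survivors
```

The appeal over consensus-based placement is that membership changes cause minimal data movement and no election traffic.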
Best Practices for Backup and Recovery: Windows Workload on AWS Amazon Web Services
Backing up Windows workloads can be challenging and cumbersome for many companies. Backup and recovery for Windows workloads on AWS, however, can be easy. This session covers best practices for backup and recovery; how to configure Windows workloads to back up to AWS; pitfalls to look out for; and recommended reference architectures.
A company’s success depends on critical application performance and availability. Upgrades and patches can improve application efficiency and user experience, but making the necessary changes requires resource-intensive environments to test updates before deploying them. What’s more, these applications need to continue accessing data even in the event of an on-premises crisis.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. Storage latency for the VMAX 250F peaked at a millisecond in our testing while IOPS stayed within an acceptable range. The solution also kept data highly available with no downtime or performance drop when we initiated a lost host connection for the primary storage. Consider the Dell EMC VMAX 250F array for your datacenter to support the critical database applications that drive your company.
You will learn how to create file archives, upload them to Amazon S3, and manage permissions and lifetimes, giving you the ability to back up any amount of data and to retain it for as long as you'd like. A number of open source and commercial backup and archiving tools will be demonstrated, as time permits.
You will also learn how to use built-in AWS facilities to quickly and easily create and restore snapshots of entire disk volumes.
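The archive-and-retain flow described above might look roughly like the sketch below with boto3. The bucket name, prefix, and retention periods are placeholder assumptions, not values from the session; the live AWS calls are shown only as comments.

```python
import tarfile

def make_archive(src_dir: str, out_path: str) -> str:
    """Pack a directory into a gzip-compressed tarball for upload."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname=".")
    return out_path

def lifecycle_policy(archive_after_days: int, expire_after_days: int) -> dict:
    """Lifecycle rule: transition backups to Glacier, then expire them,
    which is how 'retain it for as long as you'd like' is automated."""
    return {
        "Rules": [{
            "ID": "backup-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [{"Days": archive_after_days,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": expire_after_days},
        }]
    }

# With real AWS credentials, the upload itself is two calls (sketch only):
#   s3 = boto3.client("s3")
#   s3.upload_file("backup.tar.gz", "my-backup-bucket",
#                  "backups/backup.tar.gz")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-backup-bucket",
#       LifecycleConfiguration=lifecycle_policy(30, 365))
```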
This session covers IBM Spectrum Scale and how it can run in various cloud service provider environments such as IBM Cloud or Amazon Web Services. It was presented at IBM TechU in Johannesburg, South Africa, in September 2019.
SQL Server 2016 database performance on the Dell EMC PowerEdge FC630 QLogic 1... | Principled Technologies
Upgrading the hardware running your SQL Server workloads to a space-efficient, modular Dell EMC environment can help your company do a great deal of database work in a small amount of space. With Dell Express Flash technology, adding a caching solution such as Samsung AutoCache can make the environment even more efficient.
In the PT labs, we ran a mixed database workload on six Dell EMC PowerEdge FC630 servers, powered by Intel Xeon E5-2667 processors, in three PowerEdge FX2 enclosures. The solution included the QLogic QLE2692 16Gb FC adapter with StorFusion Technology, Dell EMC Storage SC9000 all-flash storage, and Dell EMC PowerEdge Express Flash NVMe Performance PCIe SSDs.
With no caching solution, the 36 SQL Server 2016 VMs on the six servers achieved a total of 431,839 orders per minute while an Oracle workload ran on 12 VMs. When we added a caching solution to accelerate the SQL database volumes, performance across the 36 SQL Server 2016 VMs roughly doubled, to 871,580. These numbers show the power of server-side caching to alleviate pressure on the storage array, allowing you to get even more out of the Dell EMC environment.
Learn how AWS customers save money, time, and effort by using AWS's backup and archive services. Organizations of all sizes rely on AWS's services to durably safeguard their data off-premises at a surprisingly low cost. This session will illustrate backup and archive architectures that AWS customers are benefitting from today.
CMPE 297 Lecture: Building Infrastructure Clouds with OpenStack | Joe Arnold
Lecture for the San Jose State master's program on cloud computing. The topic focuses on using OpenStack to deploy infrastructure clouds with commodity hardware and open source software. Covers virtualization, networking, storage, deployment, and operations.
Deep Dive on ElasticSearch Meetup event on 23rd May '15 at www.meetup.com/abctalks
Agenda:
1) Introduction to NoSQL
2) What is ElasticSearch and why is it required
3) ElasticSearch architecture
4) Installation of ElasticSearch
5) Hands on session on ElasticSearch
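For the hands-on portion, the building blocks are JSON bodies sent over HTTP. The sketch below shows the shape of an indexed document and a basic `match` query; the index name and fields are illustrative assumptions, not content from the meetup.

```python
def make_document(title: str, tags: list[str]) -> dict:
    """Shape of a JSON document to index; Elasticsearch stores JSON."""
    return {"title": title, "tags": tags}

def match_query(field: str, text: str, max_hits: int = 10) -> dict:
    """A basic full-text 'match' query body, the bread and butter of
    Elasticsearch searches."""
    return {"query": {"match": {field: text}}, "size": max_hits}

# Against a running cluster, the hands-on part is two HTTP calls (sketch):
#   PUT  /talks/_doc/1    body: make_document("Deep Dive on ES", ["nosql"])
#   POST /talks/_search   body: match_query("title", "deep dive")
```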
Building an ETL pipeline for Elasticsearch using Spark | Itai Yaffe
How we, at eXelate, built an ETL pipeline for Elasticsearch using Spark, including:
* Processing the data using Spark.
* Indexing the processed data directly into Elasticsearch using the elasticsearch-hadoop plug-in for Spark.
* Managing the flow using some of the services provided by AWS (EMR, Data Pipeline, etc.).
The presentation includes some tips and discusses some of the pitfalls we encountered while setting up this process.
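The overall shape of such a pipeline can be sketched as below. The record fields, index name, and cluster address are assumptions for illustration; the write path relies on the elasticsearch-hadoop connector mentioned above and is shown as comments, while the per-record cleanup step is plain Python that Spark would map over the data.

```python
def normalize_event(raw: dict) -> dict:
    """Example per-record cleanup step run inside the Spark job:
    lower-case the user id and drop records without a timestamp."""
    if not raw.get("ts"):
        return {}
    return {"user": raw["user"].lower(),
            "ts": raw["ts"],
            "segment": raw.get("segment", "unknown")}

# Inside the Spark job (sketch; requires pyspark + elasticsearch-hadoop):
#   df = spark.read.json("s3://bucket/raw-events/")
#   clean = df.rdd.map(lambda r: normalize_event(r.asDict())).filter(bool)
#   spark.createDataFrame(clean).write \
#        .format("org.elasticsearch.spark.sql") \
#        .option("es.nodes", "es-cluster:9200") \
#        .option("es.resource", "events/event") \
#        .save()
```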
OpenStack is an open source cloud project and community with broad commercial and developer support. OpenStack is currently developing two interrelated technologies: OpenStack Compute and OpenStack Object Storage. OpenStack Compute is the internal fabric of the cloud creating and managing large groups of virtual private servers and OpenStack Object Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data. In this tutorial, Bret Piatt will explain how to deploy OpenStack Compute and Object Storage, including an overview of the architecture and technology requirements.
SNIA: Swift Object Storage adding EC (Erasure Code) | Odinot Stanislas
In-depth presentation on EC integration in Swift object storage. Content delivered by Paul Luse, Sr. Staff Engineer at Intel, and Kevin Greenan, Staff Software Engineer at Box, during the fall SNIA event.
Understand how MySQL is a fundamental part of OpenStack, and see the excellent opportunity to use MySQL as a Service (DBaaS) in a private or public cloud with a standardized API.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... | DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions), and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... | James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf | Paige Cruz
Monitoring and observability aren’t traditionally found in software curricula, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and from whatever is part of our current company’s observability stack.
While the dev and ops silos continue to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on.
Enhancing adoption of Open Source Libraries: A case study on Albumentations.AI | Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and former Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
GridMate - End to end testing is a critical piece to ensure quality and avoid... | ThomasParaiso2
End-to-end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Generative AI Deep Dive: Advancing from Proof of Concept to Production | Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Securing your Kubernetes cluster: a step-by-step guide to success! | KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! | SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 | Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Removing Uninteresting Bytes in Software Fuzzing | Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
1. Commercialization of OpenStack: Object Storage
April 26, 2011
Joe Arnold, Cloudscaling
Dr. Jinkyung Hwang, KT
Dr. Jaesuk Ahn, KT
Wednesday, April 27, 2011
2. Building cloud infrastructure for
telcos and service providers
- Thanks to the core Swift team. They've been invaluable in sharing their knowledge about the system.
- We've brought to market several OpenStack Object Storage systems for our customers. We’re leading
the charge on large-scale deployments of OpenStack Object Storage.
- Our focus is on building infrastructure cloud services for telcos and service providers. To do this we've
focused on integrating the hardware, software and operational components so that our customers can
go to market with a fully-integrated stack.
3. •Cloud Visionaries
•Infrastructure Cloud Services
•End-user Cloud Products
•Very involved in Korean OpenStack Community
- KT has been a visionary in the cloud computing space.
- Cloudscaling has been working with KT for about a year. In that time, Cloudscaling has
helped KT launch infrastructure compute clouds including an object storage system based on
Swift.
- Released end-user cloud products
- Kicked-off Korean OpenStack Community
4. Billions of Objects in S3
[Chart: S3 object count by quarter, Q4 2006 through Q4 2010, on a scale of 0 to 300 billion objects.]
- Storage is growing.
- Applications are sprouting up for Tablets/Games/mobile devices. That application data is
living in the cloud
- Media consumption over the internet is increasing. Volume of that data is increasing.
- Need for asset storage is large.
- Users are participating and consuming more than they ever have. Social media, online
video, user-generated content are all contributing to the vast need for easily-consumable
storage systems.
Today’s storage systems need to supply endless storage.
Rackspace runs billions of objects and petabytes of files.
Clearly there is demand for these types of services.
6. Object Storage
API
Data Storage
- objects via HTTP
- Not traditional filesystem
- not blocks
- GET/PUT/Delete over REST API
- Object storage is not a traditional filesystem, or a raw block device.
- It’s just containers (folders) and objects (files) that’s available via an HTTP API.
- It can’t be mounted like a folder in your OS directly.
- There isn’t random-access to files and there can be multiple concurrent writers, so it’s
unsuitable for transactional applications like traditional relational databases. Also, it doesn’t
provide raw data blocks that an operating system can form into a filesystem, so it’s unsuitable
for booting an OS.
- Applications need to be designed with object storage in mind. As object storage is partition-tolerant, it's not possible to create file-system locks; the newest file wins. Applications need to be designed with this in mind.
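To make that model concrete, here is a minimal in-memory sketch (not Swift code; all names are illustrative) of containers, objects, and the "newest write wins" behavior for concurrent writers:

```python
# Minimal sketch of the object-storage data model: accounts hold
# containers, containers hold objects, and concurrent writers resolve
# by "newest write wins" (no file locks, no random access).
import time

class ObjectStore:
    def __init__(self):
        self.containers = {}  # container name -> {object name: (timestamp, bytes)}

    def put_container(self, container):
        self.containers.setdefault(container, {})

    def put_object(self, container, name, data, timestamp=None):
        # Last write wins: an older timestamp never overwrites a newer one.
        ts = timestamp if timestamp is not None else time.time()
        current = self.containers[container].get(name)
        if current is None or ts >= current[0]:
            self.containers[container][name] = (ts, data)

    def get_object(self, container, name):
        return self.containers[container][name][1]

    def delete_object(self, container, name):
        del self.containers[container][name]

store = ObjectStore()
store.put_container("photos")
store.put_object("photos", "cat.jpg", b"v1", timestamp=100)
store.put_object("photos", "cat.jpg", b"v2", timestamp=200)    # newer write wins
store.put_object("photos", "cat.jpg", b"stale", timestamp=150)  # older, ignored
print(store.get_object("photos", "cat.jpg"))  # b'v2'
```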
7. Upload
PUT
Data Storage
- A simplified view of an upload.
- A client makes a REST API request to PUT an object into an existing Container. The request
is received by the cluster.
- The data then is sent to three locations in the cluster. At least two of the three writes must
be successful before the client is notified that the upload was successful.
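The two-of-three success rule can be sketched as a simple quorum check (illustrative only, not Swift's proxy code):

```python
# Sketch of the write path's quorum rule: data is sent to three storage
# nodes, and the client gets a success only once a quorum (2 of 3) of
# the replica writes lands.
def quorum_put(node_writes, replicas=3, quorum=2):
    """node_writes: list of booleans, one per replica write attempt."""
    successes = sum(1 for ok in node_writes[:replicas] if ok)
    return successes >= quorum

print(quorum_put([True, True, False]))   # True: 2 of 3 succeeded
print(quorum_put([True, False, False]))  # False: only 1 of 3
```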
8. Download
GET
Data Storage
- A request comes in for an Account/Container/Object. The object's location is determined: a lookup in the Ring reveals which storage nodes contain that partition. A request is made to one of the storage nodes to fetch the object, and if that fails, requests are made to the other nodes.
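A toy version of that Ring lookup, in the spirit of consistent hashing (the partition power and placement rule here are simplified illustrations, not Swift's actual ring implementation):

```python
# Toy ring lookup: hash the /account/container/object path, take the
# top bits of the digest as a partition number, and map that partition
# to a set of storage nodes.
import hashlib

PART_POWER = 8  # 2**8 = 256 partitions in this toy ring
NODES = ["node-%d" % i for i in range(5)]

def get_partition(path):
    digest = hashlib.md5(path.encode()).hexdigest()
    return int(digest, 16) >> (128 - PART_POWER)  # top PART_POWER bits

def get_nodes(path, replicas=3):
    part = get_partition(path)
    # Toy placement: walk the node list starting from the partition offset.
    return [NODES[(part + i) % len(NODES)] for i in range(replicas)]

nodes = get_nodes("/AUTH_acct/photos/cat.jpg")
print(nodes)  # three distinct nodes for the object's partition
```

The same path always hashes to the same partition, which is what lets any proxy find the object without a central lookup service.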
9. Horizontal Growth & Concurrency
- OpenStack Object Storage is designed to have linear growth characteristics. As the system
gets larger and requests increase, the performance doesn’t degrade. To scale up, the system
is designed to grow where needed — adding storage nodes to increase storage capacity,
adding compute capacity as requests increase and growing network capacity where there are
choke points.
- Space available isn't a useful statistic alone. A key benchmark is the storage system's concurrency. Swift is able to be configured to handle a great number of simultaneous connections.
- It’s great to have the ability to scale the storage system as your customers’ applications
grow.
10. Fantastic Durability/Availability properties
Durability - Data Persists: Auditors, Replicators, Independent Zones
Availability - Access to the data: Shared-nothing access tier, Data served by any Zone
- Durability:
- As we all know, the second-worst thing you can do in this business is lose someone's data. The worst, of course, is to corrupt a customer's data. Durability refers to the system's ability to not lose or corrupt data.
- These systems are extremely durable. To achieve extreme durability numbers,
-- objects are distributed in triplicate across the cluster.
-- Auditors run to ensure the integrity of data to check for bitrot.
-- Replicators run to ensure that enough copies are in the cluster. In the event that a device
fails, data is replicated throughout the cluster to ensure there remains three copies.
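What an auditor does can be sketched as a checksum sweep (illustrative; Swift's real auditors walk on-disk objects and quarantine bad copies so replication restores them):

```python
# Sketch of an auditor: each stored object keeps the MD5 etag recorded
# at write time, and the auditor re-hashes the stored bytes to detect
# bitrot; a mismatch flags the copy for repair by replication.
import hashlib

def write_object(store, name, data):
    store[name] = {"data": bytearray(data), "etag": hashlib.md5(data).hexdigest()}

def audit(store):
    corrupted = []
    for name, obj in store.items():
        if hashlib.md5(bytes(obj["data"])).hexdigest() != obj["etag"]:
            corrupted.append(name)
    return corrupted

store = {}
write_object(store, "a.dat", b"hello")
write_object(store, "b.dat", b"world")
store["b.dat"]["data"][0] ^= 0xFF  # simulate a flipped bit on disk
print(audit(store))  # ['b.dat']
```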
- Availability: the ability for the data to be accessed.
- The servers that handle incoming api requests scale up just like any “front-end” tier for a
web application. The system is architected to use a shared-nothing approach and can use the
same proven techniques that have been used to provide high-availability by many web
applications.
- Early in a client deployment we went into pre-production (closed beta) without monitoring, and a server failed without our noticing. There was no service interruption, and Swift dutifully replicated data to other nodes to keep three copies of the data in place. We finally noticed when peak throughput numbers weren't quite as high as they had been previously. This really points out the robustness of the Swift architecture.
11. Zones: Failure Boundaries
1 2 3
- Another feature is the ability to define failure zones. Failure zones allow a cluster to be
deployed across physical boundaries which could individually fail. For example, a cluster
could be deployed across several, nearby data centers and be able to survive multiple
datacenter failures.
- 3 copies of each bit of data are distributed across zones
- We go for rack-per-zone. That means we plan for rack outages of storage servers.
- At Swift’s smallest, a zone could be a single drive or a grouping of a few drives. This scale
of deployment is quite useful for creating development / staging environments.
12. Five Zones
1 2 3 4 5
How this translates into a deployment:
- Everything in Swift is stored, by default, three times. There are three copies of just about
everything the system needs to store data.
- In order for three copies of the data to be stored, at first blush it seems like it would make sense for there to be three zones. However, Swift is designed to be a durable, highly-available system. It needs its three copies of everything, at all times.
- If a Zone goes down from a three-Zone system, there will only be two zones left!
- Five Zones is recommended as a starting point because if a Zone goes down, there will be
other zones for data to be replicated to. Having at least five zones leaves enough wiggle room
to accommodate the occasional Zone failure and enough capacity to replicate data across the
system.
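The zone arithmetic above can be checked in a few lines (a toy placement rule, not Swift's):

```python
# Sketch of zone-aware replica placement: three copies go to three
# *different* zones, so with five zones a whole-zone failure still
# leaves enough healthy zones to hold all three copies.
def place_replicas(partition, zones, replicas=3):
    # Toy placement: pick `replicas` distinct zones, offset by partition.
    return [zones[(partition + i) % len(zones)] for i in range(replicas)]

def can_hold_replicas(zones, failed_zone, replicas=3):
    healthy = [z for z in zones if z != failed_zone]
    return len(healthy) >= replicas  # still 3 copies in 3 distinct zones?

five_zones = [1, 2, 3, 4, 5]
three_zones = [1, 2, 3]
print(place_replicas(7, five_zones))              # [3, 4, 5]
print(can_hold_replicas(five_zones, failed_zone=2))   # True
print(can_hold_replicas(three_zones, failed_zone=2))  # False
```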
14.
- I’d like to recommend OpenStack Object Storage (Swift), what else?
- The software has been battle-tested by the huge deployment at Rackspace: billions of objects and petabytes of storage.
- Something is never 'proven' until it's running at scale. By that measure, Rackspace Cloud Files (and Swift) is proven. No other object storage system available is proven at this scale.
- We at Cloudscaling have been working with Swift from its initial launch in July of last year.
- Now, with KT and other commercial installations, momentum is building behind this project.
- who should? what does it look like? What should you know going in?
15. Storage is Not an Island
- Must have a reason to offer storage
- Storage is an anchor service
- Grounded with other compelling services where storage is a component.
- Data is sticky. Application migration is easy; data migration is tricky. Moving data around is difficult, often requires downtime, and is trickier to orchestrate.
-- Bring customer data into your ecosystem/platform.
- AWS S3 offered free TX-in for a very long time and offers low-cost physical media shipping, so that it can get as much customer data into its ecosystem as possible.
-- S3 grew like crazy with EC2 right next door, with 150% y/o/y growth. This is staggering.
- When building a storage product, there must be /compelling/ reasons for customers to put
data into it.
-- That can be:
--- convenience
--- access to compute resources
--- features associated with the uploaded data (transcoding, data processing)
--- even legal or compliance reasons.
16. Have an Advantage
leading South Korean landline,
mobile, internet, IPTV.
What's your unfair advantage?
- KT is the leading provider in Korea for internet, mobile, and IPTV.
- They have a huge network advantage for providing services to end-users.
- Not only that, South Korea intends to connect every home in the country with gigabit
speeds. http://www.nytimes.com/2011/02/22/technology/22iht-broadband22.html
- KT is in a unique position from a network perspective to offer the platform of services to serve this market. Media assets, including consumer media assets, need a place to reside that is well connected to the Korean consumers of these services. Regional service providers have a distinct edge in serving their local market.
- Other, out-of-country providers won't have the same cost advantages or quality of service for that market.
- Other unique assets from some of our other customers include
-- Colocation facilities and an existing customer base of managed hosting customers
-- Extensive CDN services. Object Storage serves as a jumping-off point for CDN services.
17. Be Compatible
- The contrarian point here is that for all the advantages you are going to present to your
users, the service needs to remain compatible with the tooling ecosystem.
- At one client meeting we were going down the path of 'differentiation' -- What makes this
product unique? The answer of course, was -- nothing! That's the point. In fact, we've been
working hard to make sure that you are compatible with the ecosystem of tools that are
available for end-users of the service.
-- We've worked with and contributed back to the open-source libraries
-- We've worked with OpenStack vendors like Nasuni and Gladinet to make our 'outside of
Rackspace' implementations work.
- What is distinct is the bundle of services that you provide your customers, the customer base that you already have, and the network access that you enjoy.
- one of the huge assets that OpenStack brings is the ecosystem of tools that come to the
party
-- Commercial vendors such as Nasuni and Gladinet
-- OpenSource tools such as Cyberduck, fog, and Rackspace's own Cloudfiles language
bindings/libraries for C#, java, ruby, php, python.
- You don't need to build these per se... but you do need to ensure compatibility with your
service.
-- Lots of little issues that needed to be addressed (adding alternate Cloudfiles urls, fixing
port issues with cyberduck, ssl cert issues with Gladinet, different format of keys, user
names, passwords) So you will need to make sure that these tools are compatible with your
deployment.
- The differentiation is still important! Differentiation should be in providing services on top
of the infrastructure and building platform services or other infrastructure services based on
storage.
18. Online Service Providers / Private
Huge Flat Namespace
'Repatriation' from public clouds
I know that this is the service providers track. But it's worthwhile to address folks who are
building online services or who have a need to provide private solutions.
Huge Flat Namespace
- Accounts -> Containers -> Objects
- Proliferation of storage systems requires knowledge of what data is located where. The
extreme scaling options of Swift can solve some of these issues.
- Each storage cluster can grow to be several petabytes, and for regional or additional scaling
the authentication service can route users to different clusters if need be.
'Repatriation' from public clouds
- For those who are thinking about bringing their data back in house, using an
architecturally-compatible system to the popular cloud storage products out there like S3
and CloudFiles can make a lot of sense.
- The major reason: an application doesn't need to be re-architected.
- Use something that still delivers the durability and reliability, not just API compatibility.
20. Building the System
Ecosystem
Billing Portal
Authentication
Installer Front-End
Network Ops
Hardware
Data Center
You must build it
- Development effort. So you must consider the R&D expense.
- Ramp-up a development team to understand the core of swift
- Development of integration components
OpenStack Object Storage provides a core of services and functionality.
- You can't just sudo apt-get install openstack
- OpenStack Object Storage is a solid foundation. But must be supported by a host of
services.
Let’s go into a few.
21. Billing
Billing
- There is utilization tracking as part of Swift in the Cactus release. It's much better, but it's
still 'tricky'.
- Many steps are involved here. I'll address the charges that are unique to an object storage system:
- Charge per GB Stored
- Charge for TX ingress/egress
- Charge for # of API requests
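A sketch of a bill computed along those three meters (the rates below are invented for illustration; they are not KT or Rackspace pricing):

```python
# Sketch of a usage-based bill along the three meters named above:
# GB stored, transfer in/out, and API request count.
def monthly_bill(gb_stored, gb_in, gb_out, api_requests,
                 rate_storage=0.15, rate_tx_out=0.10,
                 rate_per_10k_requests=0.01):
    storage = gb_stored * rate_storage
    transfer = gb_out * rate_tx_out  # ingress assumed free in this sketch
    requests = (api_requests / 10_000) * rate_per_10k_requests
    return round(storage + transfer + requests, 2)

# 500*0.15 + 100*0.10 + 200*0.01 = 75 + 10 + 2
print(monthly_bill(gb_stored=500, gb_in=50, gb_out=100,
                   api_requests=2_000_000))  # 87.0
```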
22. Pricing
Consumption Pricing Capacity Pricing
vs
Further, there is a decision to be made on consumption-pricing vs. capacity pricing.
When you typically go to buy bandwidth, you are charged at the 95th percentile. You pay for bandwidth that goes unused because you're paying for the capacity to be available, with some wiggle room for extraordinary bursts.
So service providers are having to figure out how to deal with this.
It's a bigger deal at a smaller scale: a single customer could come in and consume a large portion of a cluster on a percentage basis.
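The 95th-percentile method works like this (a simplified sketch; real billing systems sample on fixed intervals across the whole month):

```python
# Sketch of 95th-percentile billing: sample bandwidth at regular
# intervals, discard the top 5% of samples, and bill the highest
# remaining sample -- short bursts in the top 5% ride free.
def percentile_95(samples_mbps):
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95) - 1  # drop the top 5% of samples
    return ordered[max(cutoff, 0)]

# 100 five-minute samples: mostly 10 Mbps, a handful of 900 Mbps bursts.
samples = [10] * 95 + [900] * 5
print(percentile_95(samples))  # 10: the five burst samples are discarded
```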
23. Authentication & User Management
Authentication & User Management
- Two real options.
- 1) Use the existing authentication service that is built into Swift. Swift comes with an
authentication service that stores account information within the cluster itself.
-- The benefit is that the cluster is more self-contained and not dependent on any external services that could cause availability issues for your customers.
-- However, that means integration. If you're supporting a large customer base that has access to other services, and you want a way to centralize that so that customers' accounts and authentication credentials are manageable, more integration effort is required.
- 2) Build your own authentication service. There is an API defined; build to that spec, and make sure its scale-out / HA properties are something you're comfortable with.
-- Benefit is that an authentication system remains centralized and can service a range of
services for the customer. If this is part of a larger IT initiative or part of a broader cloud
computing offering, it’s desirable to provide end-users with a consistent way to manage and
use credentials.
-- Downside is that it's another component to build and maintain.
24. Load Balancing
Load Balancing
- One of the great properties of Swift's architecture is its ability to horizontally scale out to handle increasing API access (GET/PUT/DELETE).
- An incoming request does not need to be processed by a centralized storage controller.
- Load balancing can be handled by many mechanisms that have been refined over the past 15 years.
- The complexity of this setup will vary with the needs of the deployment. It can range from simple round-robin DNS, to Pound, to commercial load-balancing solutions like a NetScaler. Whatever load balancer is used, a health check needs to be written for the load balancer to monitor.
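A health check can be as simple as probing each proxy over HTTP; Swift ships a healthcheck middleware that answers GET /healthcheck with 200. This sketch (the function name and policy are ours) treats anything other than a quick 200 as "out of rotation":

```python
# Sketch of a load-balancer health check against a Swift proxy node.
# A slow or unreachable node counts as unhealthy, the same as a non-200.
import http.client
import socket

def proxy_is_healthy(host, port, timeout=2.0):
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/healthcheck")
        status = conn.getresponse().status
        conn.close()
        return status == 200
    except (OSError, socket.timeout):
        return False  # connection refused or timed out: pull from rotation

# A host with nothing listening on the port reports unhealthy:
print(proxy_is_healthy("127.0.0.1", 9))  # False
```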
25. Storage Nodes
24-48 GB RAM
36-48, 2TB Drives
SATA
No RAID
Newish Xeon
The Hardware
Storage Nodes
- 36-48 disk JBODs
- 24-48 GB RAM
- Go for good price/performance CPUs - Xeon E5620s / E5640s.
-- Not just data, also replicators, auditors
- While commodity hardware, these are not bare JBODs (Just a Bunch of Disks). There is a reasonable amount of memory and CPU. Metadata needs to be readily available to quickly return objects. The object stores each run services not only to field incoming requests from the Access Tier, but also to run replicators, auditors, reapers, etc.
- Our configurations currently run 2TB SATA disks without RAID. We use desktop-grade drives where we have more-responsive remote hands in the datacenter, and enterprise-grade drives elsewhere.
- SATA desktop drives (not green drives).
-- We placed an order of ~$300k worth of drives with a drive vendor (who will go nameless); the vendor refused to fill the order because it was obvious to them that we were not using the drives for desktop applications.
26. Proxy Nodes
Proxy Servers
Authentication Servers
24 GB RAM
10 GbE
Newish Xeon
Proxy Nodes
- Go for the "sweet spot" in price/performance (Xeon E5620s / E5640s), as it's better to have many of them and scale out than to have fewer monster machines.
- Dual 10GbE
- 12-44GB RAM
- Cloudscaling’s deployments segment off an “Access Tier”. This tier is the “Grand Central” of
the Object Storage system. It fields incoming API requests from clients and moves data in and
out of the system. This tier is composed of front-end load balancers, ssl-terminators,
authentication services, and it runs the Proxy server processes.
- These access servers are in their own tier. This enables read/write access to be scaled-out
independently of storage capacity. For example, if the cluster is on the public internet with
demanding needs on ssl-termination and data access, many access servers can be
provisioned. However, if the cluster is on a private network and it is being used primarily for
archival purposes, fewer access servers are needed.
- We deploy a collection of 1U servers to service this tier. These systems use a moderate amount of RAM and are CPU-intensive. As these systems field each incoming API request, we recommend two high-throughput (10GbE) interfaces: one for "front-end" incoming requests, the other for "back-end" access to the object stores to put and fetch data.
Factors to consider:
- For most publicly-facing deployments, or private deployments available across a wide-reaching corporate network, SSL will be used to encrypt traffic to the client. SSL adds significant processing load to establish sessions with clients, so more capacity in the access layer will need to be provisioned. SSL may not be required for private deployments on trusted networks.
- Application-intensive vs. archive-oriented: simply put, the volume of requests will have an impact on the provisioning of the access tier.
27. Networking
[Diagram: a pair of aggregation switches, each linked to the proxy nodes and to the per-zone switches that connect the object stores.]
Networking
- A pair of aggregation switches with two links back to the access network / border network. The aggregation switches connect to two pools of the Access Tier and to each of the five Zone switches that connect the Object Stores. All connections to the Access Tier and the Zones are 10GbE.
- Zone Network
-- Each Zone has a switch to connect itself to the aggregation network. We run a single, non-redundant switch, as the system is designed to sustain a Zone failure. Depending on the overall concurrency desired, Cloudscaling will deploy either a 1GbE or a 10GbE network to the object stores.
- Remember that when a write comes into the proxy server, 3x that traffic goes to the object stores to write the three replicas. Be sure to account for that when figuring out the theoretical limits for read/write traffic. Typically, the expected incoming bandwidth is the ceiling.
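The 3x fan-out is worth making explicit; the arithmetic is trivial, but it catches undersized back-end links:

```python
# Back-of-the-envelope check of the 3x write amplification: each client
# PUT at the proxy fans out as three replica writes to the object
# stores, so back-end write bandwidth must be ~3x front-end ingest.
def backend_write_gbps(frontend_ingest_gbps, replicas=3):
    return frontend_ingest_gbps * replicas

# A proxy ingesting 3 Gb/s of uploads pushes ~9 Gb/s to the zones,
# close to the ceiling of a single 10GbE back-end interface.
print(backend_write_gbps(3))  # 9
```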
28. Raw System Costs
Raw System Costs:
- TCO caveat: There are many components that are part of the TCO of the entire cluster.
-- Facilities, power, cooling, network, NOC staff
-- Many of those factors are site-specific
29. Raw System Costs
1 Petabyte (~$750,000, $0.75/GB):
2 Agg Switches, 5 ToR Switches, 6 Proxy/Auth Servers, 50 Object Stores
...and cables, racks, etc
120 Terabyte (~$95,000, $0.79/GB):
2 ToR Switches, 2 Proxy/Auth Servers, 5 Object Stores
...and cables, rack, etc
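The slide's $/GB figures follow from simple division (these are the 2011 numbers quoted on the slide, not current hardware pricing):

```python
# Reproducing the slide's cost-per-GB arithmetic.
def cost_per_gb(total_cost_usd, usable_tb):
    return round(total_cost_usd / (usable_tb * 1000), 2)  # 1 TB = 1000 GB

print(cost_per_gb(750_000, 1000))  # 0.75 -> the 1 PB configuration
print(cost_per_gb(95_000, 120))    # 0.79 -> the 120 TB configuration
```

Note the smaller cluster's $/GB is only slightly higher: fixed costs (switches, proxies) are spread over far less capacity, but the object stores dominate either way.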
- Illustrate hardware pricing as a baseline
- All-in hardware costs (switching, load balancing, storage nodes, optics, cabling, forged
metal for the racks, PDUs)
-- (To note: Amazon's retail pricing for S3 is $0.140 - $0.055)
-- That price is going to go down as hardware prices go down.
30. Understanding TCO
- Total-cost of ownership for the cluster should include development costs, hardware and
ongoing costs.
These include:
-- Design/Development
-- Hardware
-- Hardware Standup
-- Datacenter Space
-- Power/Cooling
-- Networking
-- Ongoing Software Maintenance and Upgrades
-- Operational Support
-- Customer Support
31. •Design/Development/Integration
•Hardware
•Hardware Standup
•Datacenter Space
•Power/Cooling
•Network Access
•Ongoing Software Maintenance
•Operational Support
•Customer Support
Understanding TCO
32. Planning Checklist
•Product Service Requirements
•Hardware Selection
•Network Design
•Facilities Planning
•Hardware Standup
•Software Provisioning
•System Configuration
•Load Balancing
•Authentication Integration
•Utilization & Billing Integration
•Additional Platform Services
•Monitoring Integration
•Operational Tooling
•Operator Training and Documentation
•Customer Training and Documentation
- There are many pieces that need to come together for a successful project, and many groups that must come together to design, build, deploy, integrate, operate, and onboard customers. Consider these activities during your planning phase:
Assemble a cross-functional team, as many hats are needed for a successful standup:
data center technicians to help plan the power/cooling needed at the DC,
networking experts to help design and plan out the network,
a great software development team to write the integrations needed and fix issues related to the software systems of the cluster,
systems administrators (Swift is built around common Unix tools, and folks with good systems-administration skills can really help tune a running system),
and a product/sales team who can communicate the value to customers and bring the product to market.
-- Customer Discovery / Determining Service Requirements
-- Hardware Selection
-- Network Design
-- Facilities Planning
-- Hardware Standup
-- Software Provisioning
-- System Configuration
-- Load Balancing
-- Authentication Integration
-- Utilization & Billing Integration
-- Additional “Value Add” Services
-- Monitoring Development and Integration
-- Operational Tooling
-- Operator Training and Documentation
33. Storage as a Service
Yes, you can offer storage as a service.
- Don't be 'just storage'; offer a suite of services.
- use OpenStack Object Storage with a commodity hardware stack to develop a cost-
competitive product offering
- Put together a cross-functional team. Many roles are needed.
- Get help. Feel free to reach out to us; we've deployed over 6 petabytes in several environments and can help design a solution for your needs.
34. KT ucloud storage service
with openstack object
storage
OpenStack conference
Jinkyung Hwang
KT Cloud Business Unit/PEG
jkhwang@kt.com
Putting ACTION into practice completes the innovation of corporate culture. (ACTION의 실천이 기업문화 혁신을 완성합니다.)
35. What we did
□ Swift start-up at Sept. 2010 and initial build-up with Chef deploy at Dec. 2010 (Austin 1.1)
SAIO --> Swift on multi-servers --> Swift on VM --> Swift with Chef
□ Deployment on KT data center (Bexar 1.2 with Swauth)
1 petabyte
□ Customer service & interworking
portal, CDN interworking, API server, and other cloud services in KT
□ Beta test service from March 2011~ (middleware for additional CDN & Open API)
hundreds of customers
with performance testing and system tunings
36. What we did – automatic deployment
□ Swift deployment with Chef
[Diagram: automatic deployment flow. A clean server is power-cycled via IPMI and boots; the DHCP server allocates an IP for its MAC; the TFTP server hands out the OS URL and kickstart URL per MAC; the OS image server's mirror repository installs the OS via kickstart; then Chef installs the Swift roles assigned per IP, yielding Swift-ready hardware.]
37. What we did - services
□ user portal : cs.ucloud.com/ss (cs: compute service, ss: storage service)
< products >
38. What we did – Cyberduck, Gladinet, Cloudfuse IW
39. What we did - architecture
□ KT Swift is based on , designed with
□ Currently, interworking with KT cloud services and 3rd-party services via the API is underway
[Diagram: the portal, CDN, 3rd-party tools, and the compute cloud reach the Swift cluster (proxies and storage servers) through the Swift API; backend auth & billing systems, RDB systems, and a monitoring & management console support the cluster.]
40. What we did – performance test
□ Internal performance tests are underway with massive loads
□ ‘Advanced’ Swift bench code is used & submitted to launchpad
http://bazaar.launchpad.net/~jkyoung0/+junk/bench_server/files
auth create, delete, authenticate (get url & token), container create, delete, file upload, download
and delete
□ Still Tuning Cluster before Launch
41. Issues to solve
□ Tunings for best/optimal performance
disk I/O seems to be the bottleneck rather than network bandwidth
tunings of system parameters as well as Swift config values are necessary
□ Lookup ID middleware for CDN, API server interworking
KT added 'cdn-uri lookup' and 'portal-id lookup' middleware to retrieve the Swift URI from a CDN URI or user ID
a general lookup middleware is necessary for service interworking
□ Statistics (1.2.0)
some values seem incorrect and bugs exist
□ Management & operations tools are necessary
system monitoring and Swift management such as a ring re-balancer, etc.
□ Revision control visibility for commercial services
As a service provider, near-zero-downtime updates are important.
Need more visibility on the upgrade path,
e.g. Ubuntu latest v10.10 vs. Ubuntu LTS (long-term support) v10.04
42. THANK YOU!
감사합니다 (Thank you)
April 26, 2011
Joe Arnold, Cloudscaling
Dr. Jinkyung Hwang, KT
Dr. Jaesuk Ahn, KT