The document discusses session state in distributed web applications: session state can be stored on the client, on the server, or in a database, and each choice trades simplicity against scalability - for example, database storage lets the application tier scale but the database itself can become a bottleneck. The document also discusses design patterns for microservices, including loose coupling, high cohesion, and bounded contexts: services should be loosely coupled, with high cohesion grouping related functionality together.
With the advent of the web and new cloud-based solutions, the demands placed on web systems have changed dramatically. Systems must now handle many users and sometimes operate under heavy load. It makes the news when popular websites collapse under load. But how do we build solutions that can withstand load? In this lecture we look at ways to scale and the concepts involved.
One of the most critical design decisions in enterprise programming is where to keep state. As discussed in the lecture on concurrency, session state is the state that is maintained between requests. A session starts when the user first hits the enterprise system and lasts until the user signs out or the session times out. In this lecture we look at session state and explore three design patterns for where to store it.
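One of those patterns, keeping session state on the client, is often sketched as a serialized, signed blob so the server does not have to trust (or remember) what the client sends back. A minimal illustration in Python, with a hypothetical secret key:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # hypothetical key; in practice loaded from configuration

def encode_session(state):
    """Serialize session state and append an HMAC so the client cannot tamper with it."""
    payload = base64.urlsafe_b64encode(json.dumps(state, sort_keys=True).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def decode_session(token):
    """Verify the signature before trusting client-supplied state; None means tampered."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(payload.encode()))

token = encode_session({"user": "alice", "cart": [42]})
assert decode_session(token) == {"user": "alice", "cart": [42]}
assert decode_session(token[:-1] + "x") is None  # a corrupted signature is rejected
```

This keeps the server stateless (good for scaling), at the cost of shipping the state with every request and keeping it small.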
The second topic in this lecture is how to distribute the application. The primary reason to do so is to gain performance and handle more load. Most enterprise applications have many users, sometimes hundreds of thousands. The only way to cope with such load is to scale the application. Scalability is a measure of how much more load an application can handle when resources are added. We will look at two ways to scale: load balancing and clustering.
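Load balancing, the first of the two scaling approaches, can be illustrated with a minimal round-robin dispatcher (the server names are invented for the example):

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin dispatcher: each request goes to the next server in turn."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        # Return the server that should handle the next request.
        return next(self._cycle)

lb = RoundRobinBalancer(["app1", "app2", "app3"])
assert [lb.pick() for _ in range(6)] == ["app1", "app2", "app3"] * 2
```

Real load balancers add health checks and, when session state lives on the server, sticky sessions so a given user keeps hitting the same machine.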
A video of this lecture is available here:
http://www.olafurandri.com/?page_id=2762
Moving On Up - smaller servers and bigger performance (Doug Lucy)
A presentation to the annual Progress user conference comparing the price and performance of x86-based Linux servers with proprietary Unix servers from HP, Sun and IBM.
Tech Talk Series, Part 3: Why is your CFO right to demand you scale down MySQL? (Clustrix)
Many web businesses enjoy a spike in traffic at some point in the year. Whether it's Black Friday, the NFL draft day, or Mother’s Day, your app needs to be able to scale and capture customer value when it is most needed. Downtime is not an option.
For a database, that means having enough capacity to ensure transaction latency stays within acceptable limits. For high capacity apps using MySQL, this means you may need to deploy triple the normal capacity usage to sustain traffic for one day. But what do you do with that hardware for the rest of the year? Do you leave it idling? That unused capacity is costing you an arm and a leg, and wasted expenses make CFOs grumpy.
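The idle-capacity argument is easy to put in numbers. A back-of-the-envelope sketch, with all figures assumed for illustration:

```python
# Hypothetical figures: a baseline fleet, the "triple capacity" peak-day rule,
# and an assumed monthly cost per server.
baseline_servers = 10
peak_multiplier = 3          # "triple the normal capacity" for the peak day
cost_per_server_month = 500  # assumed USD per server per month

# Servers provisioned only for the peak, idle the rest of the year.
extra_servers = baseline_servers * (peak_multiplier - 1)
idle_cost_per_year = extra_servers * cost_per_server_month * 12
print(idle_cost_per_year)  # 120000
```

Even with these modest assumptions, the over-provisioned capacity costs six figures a year, which is the case for scaling down (or scaling elastically) rather than buying for the peak.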
In Part 3 of our Tech Talk series, we discuss what the options are for scaling down MySQL, as well as explore answers to the following questions:
- How do I figure out the costs of not scaling down?
- How does ClustrixDB scale-down differently than MySQL?
- How real is elastic scaling in ClustrixDB? What are the catches?
View the webcast of this Tech Talk on our YouTube channel.
1049: Best and Worst Practices for Deploying IBM Connections - IBM Connect 2016 (panagenda)
Depending on deployment size, operating system and security considerations, you have different options for configuring IBM Connections. This session shows good and bad examples from multiple customer deployments. Christoph Stoettner describes what he has found and how you can optimize your systems. Main topics include simple (documented) tasks that should be applied, missing documentation, automated user synchronization, TDI solutions for user synchronization, performance tuning, security hardening, and planning single sign-on for mail, IBM Sametime and SPNEGO. This is valuable information that will help you succeed in your next IBM Connections deployment project.
A presentation from Christoph Stoettner (panagenda).
1693: 21 Ways to Make Your Data Work for You - IBM Connect 2016 (panagenda)
Your collaboration infrastructure contains a gold mine of information just waiting to be used. Francie Tanner and Henning Kunz cover a rich variety of collaboration topics such as cloud readiness, onboarding, social adoption, the Notes Browser Plugin and more. Learn from 21 real-world companies and how they tackled their next collaboration move by diving into their very own data sets.
A presentation from Francie Tanner (panagenda) and Henning Kunz (panagenda).
Apache Web Performance - Leveraging Apache to make your site FLY!
Apache is the most popular web server in the world, yet its default configuration can't handle high traffic. Learn how to set up Apache for high-performance sites and leverage many of its available modules to deliver a faster web experience for your users. Discover how Apache can max out a 1 Gbps NIC and how to serve over 140,000 pages per minute with a small Apache cluster. Get happier users, more conversions, and save money with a properly set up Apache web server.
YOUR machine and MY database - a performing relationship!? (Martin Klier)
Martin Klier - http://www.performing-databases.com
“YOUR machine and MY database - a performing relationship!?” is intended as an introduction for Oracle DBAs, DB developers and system administrators who want to learn more about how databases, operating systems and hardware work together.
Databases affect machines, and machines affect databases; optimizing one is pointless without knowing the other. System administrators and database administrators will not necessarily share the same opinion - often because they know little about the other side's needs. This lecture was made to promote understanding - showing how the database can stress the server, how the server can limit the database, and why two admins sometimes don't speak the same language, not even with a developer as an interpreter.
• Recall the different needs of different technical layers underneath a database system.
• Understand the technical collaboration of hardware, operating system and database.
• Plot ways to avoid collisions, competition and concurrency.
• Promote collaboration!
This white paper and its presentation were written in late 2013 and early 2014 from scratch for IOUG forum at COLLABORATE 14.
Database as a Service on the Oracle Database Appliance Platform (Maris Elsins)
Speaker: Marc Fielding, Co-speaker: Maris Elsins.
Oracle Database Appliance provides a robust, highly available, cost-effective, and surprisingly scalable platform for a database-as-a-service environment. By leveraging Oracle Enterprise Manager's self-service features, databases can be provisioned on a self-service basis to a cluster of Oracle Database Appliance machines. Discover how multiple ODA devices can be managed together to provide both high availability and incremental, cost-effective scalability. Hear real-world lessons learned from successful database consolidation implementations.
Building a Scalable Architecture for web apps (Directi Group)
Visit http://wiki.directi.com/x/LwAj for the video. This is a presentation I delivered at the Great Indian Developer Summit 2008. It covers a wide array of topics and a plethora of lessons we have learnt (some the hard way) over the last 9 years building web apps that are used by millions of users and serve billions of page views every month. Topics and techniques include vertical scaling, horizontal scaling, vertical partitioning, horizontal partitioning, loose coupling, caching, clustering, reverse proxying and more.
You can watch the replay for this Geek Sync webcast, Successfully Migrating Existing Databases to Azure SQL Database, on the IDERA Resource Center, http://ow.ly/k4p050A4rBA.
First impressions have long-lasting effects. When making an architecture change like migrating to Azure SQL Database, the last thing you want is to leave a bad first impression with an unsuccessful migration. In this session, you will learn the difference between Azure SQL Database, SQL Managed Instances, and Elastic Pools, and how to use tools to test migrations for compatibility issues before you start the migration process. You will learn how to successfully migrate your database schema and data to the cloud. Finally, you will learn how to determine which performance tier is a good starting point for your existing workload(s) and how to monitor your workload over time to make sure your users have a great experience while you save as much money as possible.
Speaker: John Sterrett is an MCSE: Data Platform, Principal Consultant and the Founder of Procure SQL LLC. John has presented at many community events, including Microsoft Ignite, PASS Member Summit, SQLRally, 24 Hours of PASS, SQLSaturdays, PASS Chapters, and Virtual Chapter meetings. John is a leader of the Austin SQL Server User Group and the founder of the HADR Virtual Chapter.
Docker 101 for Oracle DBAs - Oracle OpenWorld 2017 (Adeesh Fulay)
SUN5617 - Docker 101 for Oracle DBAs
Linux containers (not to be confused with Oracle Container Cloud Service), such as Docker and LXC, are a next-generation virtualization technology. Imagine having all the benefits of a hypervisor-based virtual machine but with no performance overhead. It is this combination that makes containers ideal for databases, especially when running on bare metal. While the adoption of containers has been steadily increasing for many applications and databases, the Oracle community at large has been fairly sluggish. In this session, bring your laptop along and practice basic Docker commands.
The ENGAGE Learning portal and tools were presented at this workshop, along with a step-by-step introduction to game-based learning. The tools support workshop participants in selecting, modifying, designing and adopting games for their own classes, taking local and cultural agendas into account. Selected use cases of game-based learning were presented and explained. The workshop interleaved presentations, demonstrations, discussions and group work.
How to match the blistering evolution of social media with effective internal and external social technology strategies.
While progressive companies are tying themselves in million-dollar knots just building Facebook apps or chasing the latest Twitter-marketing strategy, Perficient proposes that firms take a more holistic view:
The most popular social technologies did not even exist eight years ago, so the trick is not in deciding which ones deserve your money or man-hours.
The trick is learning how to anticipate and leverage trends in human interaction in ways that will keep your business responsive, agile and synched with the ever-shifting DNA of social media evolution.
The trick to mastering social media is this:
It’s not the software. It’s the culture.
At the “Architecting for the Cloud” breakfast seminar we discussed the requirements of modern cloud-based applications and how to overcome the confinement of traditional on-premises infrastructure.
We heard from data management practitioners and cloud strategists about how organizations are meeting the challenges associated with building new or migrating existing applications to the cloud.
Finally, we discussed how the right cloud-based architecture can:
- Handle rapid user growth by adding new servers on demand
- Provide high performance even in the face of heavy application usage
- Offer around-the-clock resiliency and uptime
- Provide easy and fast access across multiple geographies
- Deliver cloud-enabled apps in public, private, or hybrid cloud environments
Understanding System Design and Architecture: Blueprints of Efficiency (Knoldus Inc.)
This exploration delves into the intricate world of system design and architecture, dissecting the fundamental principles and methodologies that underpin the creation of robust and scalable systems. From the conceptualization of software structures to the deployment of hardware components, this comprehensive study navigates through the critical decisions and considerations that engineers face when crafting efficient and reliable systems. Gain insights into best practices, design patterns, and emerging trends that shape the backbone of modern technology, empowering you to engineer solutions that stand the test of time. Whether you're a seasoned architect or an aspiring designer, embark on a journey to master the art and science of system design and architecture.
Introduction and Basics of Web Technology .pptx (LEENASAHU42)
Introduction: web system architecture - 1-, 2-, 3- and n-tier architecture, URL, domain name system, overview of HTTP, web site design issues, and an introduction to the role of SEO (Search Engine Optimization) in web page development.
Software Architecture for Cloud Infrastructure (Tapio Rautonen)
Distributed systems are hard to build. Software architecture must be carefully crafted to suit cloud infrastructure.
Design for failure. Learn from failure. Adopt new cloud compatible design patterns and follow the guidelines during the journey of building cloud native applications.
Caching for Microservices Architectures: Session I (VMware Tanzu)
In this 60-minute webinar, we will cover the key areas of consideration for data layer decisions in a microservices architecture, and how a caching layer satisfies these requirements. You’ll walk away from this webinar with a better understanding of the following concepts:
- How microservices are easy to scale up and down, so both the service layer and the data layer need to support this elasticity.
- Why microservices simplify and accelerate the software delivery lifecycle by splitting up effort into smaller isolated pieces that autonomous teams can work on independently. Event-driven systems promote autonomy.
- Where microservices can be distributed across availability zones and data centers for addressing performance and availability requirements. Similarly, the data layer should support this distribution of workload.
- How microservices can be part of an evolution that includes your legacy applications. Similarly, the data layer must accommodate this graceful on-ramp to microservices.
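A common way a caching layer satisfies these requirements is the cache-aside pattern: the service checks the cache first and only falls back to the system of record on a miss. A minimal sketch, where in-memory dictionaries stand in for the database and the distributed cache, and the TTL is an assumed policy:

```python
import time

db = {"user:1": {"name": "Alice"}}  # stand-in for the system of record
cache = {}                          # stand-in for a distributed cache
TTL = 30.0                          # seconds before a cached entry is considered stale

def get_user(key):
    """Cache-aside read: try the cache, fall back to the database, then populate the cache."""
    entry = cache.get(key)
    if entry and time.monotonic() - entry[1] < TTL:
        return entry[0]  # cache hit: no database round trip
    value = db[key]      # cache miss: read the slower store
    cache[key] = (value, time.monotonic())
    return value

assert get_user("user:1") == {"name": "Alice"}  # first read misses and fills the cache
assert "user:1" in cache                        # subsequent reads within the TTL are hits
```

Because the cache is populated lazily, it scales down naturally: entries for idle services simply expire.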
Presenter : Jagdish Mirani is a Product Marketing Manager in charge of Pivotal’s in-memory products
Cloud Architecture Tutorial - Running in the Cloud (3 of 3) (Adrian Cockcroft)
Part 3 of the talk covers how to transition to cloud, how to bootstrap developers, how to run cloud services including Cassandra, capacity planning and workload analysis, and organizational structure
- What is a DBMS
- Database System Applications
- The Evolution of a Database
- Drawbacks of File Management Systems / Purpose of Database Systems
- Advantages of DBMS
- Disadvantages of DBMS
- DBMS Architecture
- Types of modules
- Three-Tier and n-Tier Architectures for Web Applications
- Different levels and types
- Data Abstraction
- Data Independence
- Database State or Snapshot
- Database Schema vs. Database State
- Categories of data models
- Different Users
- Database Languages
- Relational Model
- ER Model
- Object-based model
- Semi-structured data model
Make your first CloudStack Cloud successful (Tim Mackey)
As presented at the 2014 CloudStack Collaboration Conference in Denver (CCCNA14), this deck covers some of the decision points impacting a successful deployment of CloudStack within your organization. Critical elements such as storage and networking are discussed to create a blueprint which seeks to remove some of the learning curve associated with the transition from data center management to cloud management.
A lecture given to Félag tölvunarfræðinga and Verkfræðingafélagið on 18 May 2022.
Innovation is a prerequisite for technological progress, which in turn drives development. Innovation usually starts small and needs many iterations to work. Entrepreneurs creating something new must contend not only with the technology and its limitations, but also with the opinions of contemporaries who do not always see the point of a new technology. In this lecture, Ólafur Andri examines innovation and the progress that has been made, and considers where today's technological advances will take us in the years to come.
Ólafur Andri Ragnarsson is an adjunct at Reykjavík University, where he teaches a course on technological development and how technological change affects companies. He holds an MSc in computer science from Oregon University in the United States. Ólafur Andri is an entrepreneur who co-founded Margmiðlun and later Betware, and he also took part in founding the games company Raw Fury AB in Stockholm.
A lecture given to the technology group of Stjórnvísi on 13 October 2020.
Over the past decades we have seen enormous progress in technology and innovation worldwide. This progress has brought increased prosperity to all of humanity. Despite a global pandemic, progress is not slowing down; it will only accelerate in the coming years. Artificial intelligence, robots, virtual reality, the Internet of Things and much more are creating new solutions and new opportunities. The future is shrouded in mystery and can be both exciting and frightening at once. The only thing we know for certain is that the future will always be better. In this lecture, Ólafur Andri Ragnarsson, a teacher at Reykjavík University, discusses the latest technology and the future.
Technology is one of the factors of change. When new disruptive technology is introduced, it can change industries. We have many examples of that, and we will start this journey with one of the most important innovations of our lifetimes: the smartphone. We will explore the impact of the smartphone and the fate of the companies of the day when the iPhone, the first smartphone as we know them, was introduced to the world.
We will also look at other examples from history. Then we look at the broader picture: past industrial revolutions and the one we are experiencing now, the fourth industrial revolution. Specifically, we look briefly at the technologies that fuel this revolution, for example artificial intelligence, robotics, drones, the Internet of Things and more.
Humanlike machines have fascinated humans since ancient times. Modern robots started to take shape with the industrial revolution. In the 20th century, robots were mostly industrial machines found in factories, such as car factories.
Today, robots can have sensors and vision; they can hear and understand, and they can connect to the cloud for more information. However, we are still in the early stages of robotics, and robots have a long way to go before becoming useful as ubiquitous, general-purpose devices.
The normal way to interact with computers is with a keyboard and a mouse, with output on a relatively small rectangular screen running a 2D windowing system. The mouse was invented more than 40 years ago and has been the dominant input device for over 20 years. Now we are seeing new types of input devices. Multi-touch adds new dimensions and new applications. Natural user interfaces, or gesture interfaces, let people point and drag objects. Computers are also beginning to recognize facial expressions, so a computer knows if you are smiling. Voice and natural language understanding are reaching a usable stage. All this calls for new types of applications.
Displays are getting bigger. What if any surface could be a screen? What if you could spray a wall with screen, or have your phone project images onto the wall?
This lecture explores some of these new types of interaction with computers and software. They make the venerable mouse look dated.
Local is the "Lo" in SoLoMo, the buzzword. Local is not only about location; it is also about your digital track record. Over 70% of Netflix users watch the films recommended to them. Mining data to understand people's behaviour is becoming a huge and valuable business. Advertisers see opportunities in reaching their target groups directly. Predictive intelligence is also about where you will be at some time in the future, and where somebody you know will be.
It turns out that Facebook and Google know you better than you think you know yourself. The world is about to get really scary.
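The kind of data mining behind such recommendations can be illustrated with a tiny user-similarity sketch. All names and ratings here are invented; real systems use far richer signals:

```python
import math

# Hypothetical viewing histories: user -> {title: rating}
ratings = {
    "ann": {"Matrix": 5, "Alien": 4, "Amelie": 1},
    "bob": {"Matrix": 4, "Alien": 5, "Blade Runner": 4},
    "eve": {"Amelie": 5, "Notebook": 4},
}

def cosine(u, v):
    """Cosine similarity between two users, computed over titles both have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[t] * v[t] for t in common)
    return dot / (math.hypot(*u.values()) * math.hypot(*v.values()))

def recommend(user):
    """Suggest titles rated by the most similar other user but unseen by this one."""
    others = [(cosine(ratings[user], ratings[o]), o) for o in ratings if o != user]
    _, nearest = max(others)
    return sorted(set(ratings[nearest]) - set(ratings[user]))

assert recommend("ann") == ["Blade Runner"]  # bob is ann's nearest neighbour
```

Even this toy version shows why the data itself is the asset: the more history a service has, the better its guesses about what you will do next.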
Over two billion people have signed up for Facebook, the most used site on the Internet. People are not watching TV so much anymore; they are using Facebook, YouTube, Netflix and a number of other popular web sites.
Some people devote their time to working for others online. What drives people to write an article on Wikipedia? They don't get paid. Companies are enlisting people to help with innovation, and sites such as Galaxy Zoo ask people to help identify images. And why do people film themselves singing, when they cannot sing, and post the video on YouTube?
In this lecture we talk about how people are using the web to interact in new ways and to get things done.
With the computer revolution, a vast amount of digital data has become available. With the Internet and smart connected products, the data is growing exponentially. It is estimated that every year more data is generated than in all of history prior, and this has held true for several years running.
With all this data, data becomes a platform for something new in its own right. In this lecture we look at what big data is and at several examples of how to use data. There are many well-known algorithms for analysing data, such as clustering and machine learning.
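Clustering, one of the algorithms mentioned, can be sketched with a plain k-means implementation (the data points below are invented to form two obvious groups):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then recompute centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k distinct data points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # New centroid = mean of its cluster; keep the old one if a cluster is empty.
        centroids = [
            tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
    return centroids, clusters

# Two groups of 2-D points: one near (0, 0), one near (10, 10).
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(points, k=2)
assert sorted(len(c) for c in clusters) == [3, 3]  # the two groups are recovered
```

Production systems run the same idea over millions of points with smarter initialization and distributed execution, but the core loop is this assign-then-average iteration.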
After the computing industry got started, a new problem quickly emerged: how do you operate these machines, and how do you program them? The development of operating systems was relatively slow compared to the advances in hardware. The first systems were primitive but slowly got better as demand for computing power increased. The ideas behind the graphical user interface, or GUI ("gooey"), go back to Doug Engelbart's famous demo. However, this did not have much impact on the computer industry at first. One company though, Xerox, a photocopier company, explored these ideas at its Palo Alto Research Center (PARC). Steve Jobs of Apple and Bill Gates of Microsoft took notice, and Apple introduced first the Apple Lisa and then the Macintosh.
In this lecture we look at lessons from the development of software and algorithms, and see how our business theories apply.
In the second part we look at where software is going, namely Artificial Intelligence. Resent developments in AI are causing an AI boom and new AI application are coming all the time. We look at machine learning and deep learning to get an understanding of the current trends.
We are currently living in times of great transformation. We have over the last couple of decade seen the Internet become the most powerful disrupting force in the world, connecting everyone and transforming businesses. Now everyday objects - things we use are getting smart with sensors and software. And they are connecting. What does this mean?
We will see the world become alive. Cars will talk to road sensors that talk to systems that guide traffic. Plants will talk to weather systems that talk to scientists that research climate change. Farming fields will talk to the farming system that talks to robots that do fertilising and harvesting. Home appliances like refrigerators, ovens, coffee machines and microwaves ovens will talk to the home food and cooking system that will inform the store that you are running out butter, cheese, laundry detergent and coffee beans, which will inform the robot driver to get this to your house after consulting your calendar upon when someone is at home.
In this lecture we explore the Internet of Things, IoT.
The Internet grew out of US efforts to build the ARPANET, a network of peer computers built during the cold war. The two major players were military and academia. The network was simple and required no efforts for security or social responsibility. The early Internet community was mainly highly educated and respectable scientist. In the early 1990s the World Wide Web, a hypertext system is introduced, and soon browsers start to appear, leading the commercialization of Net. New businesses emerge and a technology boom known as the dot-com era.
The network, now over 40, is being stretched. Problems such as spam, viruses, antisocial behaviour, and demands for more content are prompting reinvention of the Net and threatening its neutrality. Add to this government efforts to regulate and limit the network.
In this lecture we look at the Internet and the impact of the network. We will also look at the future of the Internet.
The Internet grew out of US efforts to build the ARPANET, a network of peer computers built during the cold war. The two major players were military and academia. The network was simple and required no efforts for security or social responsibility. The early Internet community was mainly highly educated and respectable scientist. In the early 1990s the World Wide Web, a hypertext system is introduced, and soon browsers start to appear, leading the commercialisation of Net. New businesses emerge and a technology boom known as the dot-com era.
The network, now over 40, is being stretched. Problems such as spam, viruses, antisocial behaviour, and demands for more content are prompting reinvention of the Net and threatening its neutrality. Add to this government efforts to regulate and limit the network.
In this lecture we look at the Internet and the impact of the network. We will also look at the future of the Internet.
The ideas for cellular phones were developed in the 1940s. However, it was not until the microprocessor becomes available that practical commercial solutions are possible.
Today there are more than 5 billion unique mobile phone subscriptions in the world and of them about 2.5 billion are smartphones. This device is so powerful that people check it over 40 times a day.
In this lecture we look mobile. We also look at the history of communication since the telegraph and how the mobile market developed in the 80s and 90s until the iPhone was released in 2007. That same year Western Union stopped sending telegraph messages.
Did you know that the term "Computer" once meant a profession? And what did people or computers actually do? They computed mathematical problems. Some problems were tedious and error prone. And it is not surprising that people started to develop machines to aid in the effort. The first mechanical computers were actually created to get rid of errors in human computation. Then came tabulating machines and cash registers. It was not until telephone companies were well established that computing machines became practical.
First computers were huge mainframes, but soon minicomputers like DEC’s PDP started to appear. The transistor was introduced in 1947, but its usefulness was not truly realized until in 1958 when the integrated circuit was invented. This led to the invention of the microprocessor. Intel, in 1971, marketed the 4004 – and the personal computer revolution started. One of the first Personal Computers was MITS’ Altair. This was a simple device and soon others saw the opportunities.
In this lecture we start our coverage of computing and look at some of the early machines and the impact they had.
Software is changing the way traditional business operate. People now have smartphones in their pockets - a supercomputer that is 25,000 times more powerful and the minicomputers of the 1960s. This is changing people's behaviour and how people shop and use services. The organisational structure created in the 20th century cannot survive when new digital solution are being offered. Software is changing the way traditional business operate. People now have smartphones in their pockets - a supercomputer that is 25,000 times more powerful and the minicomputers of the 1960s. This is changing people's behaviour and how people shop and use services. The organisational structure created in the 20th century cannot survive when new digital solution are being offered. The hierarchical structure of these established companies assumes high coordination cost due to human activity. But when the coordination cost drops
The organisational structure that companies in the 20th century established was based on the fact that employees needed to do all the work. The coordination cost was high due to the effort and cost of employees, housing etc. Now we have software that can do this for use and the coordination cost drops to close-to-zero. Another thing is that things become free. Consider Flickr. Anybody can sign up and use the service for free. Only a fraction of the users get pro account and pay. How can Flickr make money on that? It turns out that services like this can.
Many businesses make money by giving things away. How can that possibly work? The music business has suffered severely with digital distribution of content. Should musicians put all their songs on YouTube? What is the future business model for music?
One of the great irony of successful companies is how easily they can fail. New companies are founded to take advantage of some new technology. They become highly successful and but when the technology shifts, something new comes along, they are unable to adapt and fail. This is the innovator’s dilemma.
Then there are companies that manage to survive. For example, Kodak survived two platform shift, only til fail the third. IBM has survived over 100 years. What do successful companies do differently?
History has many examples of great innovators who had difficult time convincing their contemporaries of new technology. Even incumbent and powerful companies regarded new technologies as inferior and dismissed it as "toys". Then when disruptive technologies take off they often are overhyped and can cause bubbles like the Internet bubble of the late 1990s.
In this lecture we look at some examples of disruptive technologies and the impact they had. We look at the The Disruptive Innovation Theory by Harvard Professor Clayton Christensen.
Technology evolves in big waves that we call revolutions. The first revolution was the Industrial revolution that started in Britain in 1771. Since than we have see more revolutions come and how we are in the fifth. These revolutions follow a similar path. First there is an installation period where the new technologies are installed and deployed, creating wealth to those who were are the right place at the right time. This is followed by a frenzy, where financial markets wants to be apart. The there is crash and turning point, followed by synergy, a golden age.
In 1908, a new technological revolution started. It was the Age of Oil and Automobile. The technology trigger was Henry Ford´s new assembly line technique that allowed the manufacturing of standardized, low cost automobile. This created the car industry and other manufacturing companies. This also created demand for gas thus creating the oil industry. During the Roaring Twenties the stock prices rose to new levels, until a crash and the Great Depression. Only after World War II, came a turnaround point followed by a golden age in the post-war boom.
In this lecture we look at a framework for understanding technological revolutions. There revolutions completely change societies and replace the old with new technologies. We will explore how these revolutions take place. We should now be in the golden age phase.
We also look at generations.
In the early days of product development, the technology is inferior and lacking in performance. The focus is very much on the technology itself. The users are enthusiast who like the idea of the product, find use for it, and except the lack of performance. Then as the product becomes more mature, other factors become important, such as price, design, features, portability. The product moves from being a technology to become a consumer item, and even a community.
In this lecture we explore the change from technology focus to consumer focus, and look at why people stand in line overnight to buy the latest gadgets.
In the early days of product development, the technology is inferior and lacking in performance. The focus is very much on the technology itself. The users are enthusiast who like the idea of the product, find use for it, and except the lack of performance. Then as the product becomes more mature, other factors become important, such as price, design, features, portability. The product moves from being a technology to become a consumer item, and even a community.
In this lecture we explore the change from technology focus to consumer focus, and look at why people stand in line overnight to buy the latest gadgets.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
2. Agenda
▪ Evolution - where are we today?
▪ Requirements of 21st century web applications
▪ Session State
▪ Distribution Strategies
▪ Scale Cube
▪ Eventual Consistency
– CAP Theorem
▪ Real World Example
3. Evolution
▪ 60s – IBM mainframes; limited layering or abstraction
▪ 70s – IBM, DEC minicomputers; Unix, VAX; “dumb” terminals; screens/files
▪ 80s – PC, Intel; DOS, Mac, Unix, Windows; client/server; RMDB
▪ 90s – Windows; Internet, HTTP; web browsers; web applications; RMDB
▪ 00s – Windows, Linux, MacOS; browsers, services; domain applications; RMDB
4. Evolution
▪ 60s – IBM mainframes; limited layering or abstraction
▪ 70s – IBM, DEC minicomputers; Unix, VAX; “dumb” terminals; screens/files
▪ 80s – PC, Intel; DOS, Mac, Unix, Windows; client/server; RMDB
▪ 90s – Windows; Internet, HTTP; web browsers; web applications; RMDB
▪ 00s – Windows, Linux, MacOS; browsers, services; domain applications; RMDB
▪ 10s – iOS, Android; HTML5 browsers; apps; APIs; cloud; NoSQL
5. Motivation
▪ Requirements of 21st century web systems
– High availability
– Millions of simultaneous users
– Peak load of 1000s tx/sec
▪ Example
– What if we need to handle a load of 20,000 tx/sec?
– That’s 1.2 million tx per minute
7. Business Transactions
▪ Transactions that span more than one request
– The user works with data before it is committed to the database
• Example: the user logs in, puts products in a shopping cart, buys, and
logs out
– Where do we keep the state between requests?
Login → Catalog search → List of results → Select products, put into cart → Buy cart
8. State
▪ Server with state vs. stateless server
– A stateful server must keep the state between requests
▪ Problem with stateful servers
– They need more resources and limit scalability
[Diagram: Clients 1–3 each have their own data (Data 1, Data 2, Data 3) held
on the stateful server; the same clients talk to a stateless server that
keeps no per-client data]
9. Stateless Servers
▪ Stateless servers scale much better
▪ Use fewer resources
▪ Example:
– View book information
– Each request is separate
▪ REST was designed to be stateless
10. Stateful Servers
▪ Stateful servers are the norm
▪ Not easy to get rid of them
▪ Problem: they take resources and cause server affinity
▪ Example:
– 100 users make a request every 10 seconds; each request takes 1
second
– One stateful object per user
– Objects are idle 90% of the time
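The 90% figure follows from simple utilization arithmetic; a quick sketch:

```python
# Utilization of one stateful session object:
# the user issues a request every 10 seconds and each request takes 1 second.
request_interval_s = 10
request_duration_s = 1

busy_fraction = request_duration_s / request_interval_s  # busy 1s out of every 10s
idle_fraction = 1 - busy_fraction

print(f"busy {busy_fraction:.0%}, idle {idle_fraction:.0%}")  # busy 10%, idle 90%
```

With 100 such users, the server holds 100 objects of which roughly 90 sit idle at any moment.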
11. Session State
▪ State that is relevant to a session
– State used in business transactions that belongs to a specific client
– A data structure belonging to a client
– May not be consistent until it is persisted
▪ Session state is distinct from record data
– Record data is long-term persistent data in a database
– Session state might end up as record data
13. Ways to Store Session State
▪ We have three players
– The client using a web browser or app
– The server running the web application and domain logic
– The database storing all the data
Client Server Database
14. Ways to Store Session State
▪ Three basic choices
– Client Session State
– Server Session State
– Database Session State
Client Server Database
15. Client Session State
Store session state on the client
▪ How It Works
– Desktop applications can store the state in memory
– Web solutions can store state in cookies, hide it in the web page, or
use the URL
– A Data Transfer Object can be used
– A session ID is the minimum client state
– Works well with REST – Representational State Transfer
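One hedged sketch of client session state (function names, secret, and cookie format are all invented for illustration): the server keeps no state at all, but signs the cookie so tampering by the client is detectable, which addresses part of the security drawback.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side secret"  # kept on the server, never sent to the client

def encode_session(state):
    """Serialize session state into a cookie value and append an HMAC."""
    payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def decode_session(cookie):
    """Return the session state if the signature checks out, else None."""
    payload, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # client modified the cookie
    return json.loads(base64.urlsafe_b64decode(payload))

cookie = encode_session({"user": "alice", "cart": ["book-42"]})
assert decode_session(cookie) == {"user": "alice", "cart": ["book-42"]}

# Flip the last signature character: the tampered cookie is rejected.
bad = cookie[:-1] + ("0" if cookie[-1] != "0" else "1")
assert decode_session(bad) is None
```

Note that signing only prevents tampering, not reading; sensitive data would also need encryption or should stay on the server.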
16. Client Session State
▪ When to Use It
– Works well if server is stateless
– Maximal clustering and failover resiliency
▪ Drawbacks
– Does not work well for large amounts of data
– Data gets lost if client crashes
– Security issues
17. Server Session State
Store session state on a server in a
serialised form
▪ How It Works
– Session objects – data structures on the server keyed to a session ID
▪ Format of data
– Can be binary, objects, or XML
▪ Where to store the session
– In memory, in the application server, in a file, or in a local or
in-memory database
18. Server Session State
▪ Specific Implementations
– HttpSession
– Stateful Session Beans – EJB
▪ When to Use It
– Simplicity: it is easy to store and retrieve data
▪ Drawbacks
– Data can get lost if server goes down
– Clustering and session migration becomes difficult
– Space complexity (memory of server)
– Inactive sessions need to be cleaned up
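A minimal sketch of server session state (an illustrative stand-in, not the actual HttpSession or EJB API): session objects keyed by session ID, with a timeout so inactive sessions can be cleaned up.

```python
import time
import uuid

class SessionStore:
    """Server session state: per-client data structures keyed by session ID."""

    def __init__(self, timeout_s=1800):
        self.timeout_s = timeout_s
        self._sessions = {}  # session id -> (last_access_time, data dict)

    def create(self):
        """Start a new session and hand its ID back to the client."""
        sid = uuid.uuid4().hex
        self._sessions[sid] = (time.time(), {})
        return sid

    def get(self, sid):
        """Return the session data, or None if unknown or timed out."""
        entry = self._sessions.get(sid)
        if entry is None or time.time() - entry[0] > self.timeout_s:
            self._sessions.pop(sid, None)  # drop expired sessions lazily
            return None
        self._sessions[sid] = (time.time(), entry[1])  # touch on access
        return entry[1]

store = SessionStore(timeout_s=1800)
sid = store.create()
store.get(sid)["cart"] = ["book-42"]       # state lives on the server
assert store.get(sid) == {"cart": ["book-42"]}
assert store.get("no-such-session") is None
```

The drawbacks from the slide show up directly: the dictionary lives in one process's memory, so it is lost on a crash and is hard to share across a cluster.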
19. Database Session State
Store session data as committed data in the database
▪ How It Works
– Session State stored in the database
– Can be stored as temporary data to distinguish from committed
record data
▪ Pending session data
– Pending session data might violate integrity rules
– Use a pending field or pending tables
• When pending session data becomes record data, it is saved in the
real tables
20. Database Session State
▪ When to Use It
– Improved scalability – easy to add servers
– Works well in clusters
– Data is persisted, even if data centre goes down
▪ Drawbacks
– Database becomes a bottleneck
– Need a clean-up procedure for pending data that never became
record data – the user just left
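A hedged sketch of the pending-field approach using SQLite (table and column names invented for illustration): session rows are inserted with a pending flag, then promoted to committed record data when the user buys.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE orders (
    session_id TEXT,
    product    TEXT,
    pending    INTEGER NOT NULL DEFAULT 1)""")  # 1 = session data, 0 = record data

def add_to_cart(sid, product):
    """Store session state in the database, flagged as pending."""
    db.execute("INSERT INTO orders VALUES (?, ?, 1)", (sid, product))

def buy(sid):
    """Promote pending session data to committed record data."""
    db.execute("UPDATE orders SET pending = 0 WHERE session_id = ?", (sid,))
    db.commit()

add_to_cart("s1", "book-42")
buy("s1")
record_rows = db.execute(
    "SELECT product FROM orders WHERE pending = 0").fetchall()
assert record_rows == [("book-42",)]
```

Because the state lives in the database, any server in the cluster can pick up the session, at the cost of a database round-trip per request.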
21. What about dead sessions?
▪ Client session
– Not our problem
▪ Server session
– Web servers time out inactive sessions
▪ Database session
– Needs to be cleaned up
– Retention routines
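A retention routine for database sessions can be a scheduled delete; a sketch (schema and timeout invented for illustration) that drops pending rows not touched within the timeout:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT, last_seen REAL, pending INTEGER)")
db.execute("INSERT INTO sessions VALUES ('old',  ?, 1)", (time.time() - 7200,))
db.execute("INSERT INTO sessions VALUES ('live', ?, 1)", (time.time(),))

def purge_dead_sessions(timeout_s=1800):
    """Retention routine: remove pending session rows the user abandoned."""
    cutoff = time.time() - timeout_s
    db.execute("DELETE FROM sessions WHERE pending = 1 AND last_seen < ?",
               (cutoff,))
    db.commit()

purge_dead_sessions()
remaining = [row[0] for row in db.execute("SELECT id FROM sessions")]
assert remaining == ["live"]  # the two-hour-old session has been purged
```

In practice such a routine would run on a schedule (e.g. a cron job) rather than inline with requests.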
22. Caching
▪ Caching keeps temporary data in memory between requests
for performance reasons
– Not session data
– Can be thrown away and retrieved again at any time
▪ Saves the round-trip to the database
▪ Cached data can become stale and outdated
– Distributed caching (message-driven caches) is one way to solve that
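A small sketch of the caching idea (illustrative only, not a specific product): cached values carry an age, so fresh entries save the database round-trip while stale ones are thrown away and refetched.

```python
import time

class Cache:
    """Throwaway cache: entries older than max_age_s are refetched."""

    def __init__(self, fetch, max_age_s=60):
        self.fetch = fetch          # the expensive database round-trip
        self.max_age_s = max_age_s
        self._entries = {}          # key -> (stored_at, value)
        self.misses = 0

    def get(self, key):
        entry = self._entries.get(key)
        if entry and time.time() - entry[0] < self.max_age_s:
            return entry[1]                      # fresh: no round-trip
        self.misses += 1
        value = self.fetch(key)                  # stale or absent: refetch
        self._entries[key] = (time.time(), value)
        return value

# A stand-in fetch function simulating a database lookup.
cache = Cache(fetch=lambda key: f"row-for-{key}", max_age_s=60)
assert cache.get("book-42") == "row-for-book-42"
assert cache.get("book-42") == "row-for-book-42"
assert cache.misses == 1   # the second read was served from memory
```

Unlike session state, nothing here is lost if the cache is cleared; the next read simply pays the round-trip again.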
23. Practical Example
▪ Client session
– For preferences,
user selections
▪ Server session
– Used for browsing and
caching
– Logged in customer
▪ Database
– “Legal” session
– Stored, trackable, need to survive between sessions
27. Distributed Architecture
▪ Distribute processing by placing objects on different nodes
▪ Benefits
– Load is distributed between different nodes giving overall better
performance
– It is easy to add new nodes
– Middleware products make calls between nodes transparent
But is this true?
28. Distributed Architecture
▪ Distribute processing by placing objects on different nodes
“This design sucks like an inverted hurricane” – Fowler
Fowler’s First Law of Distributed Object Design: Don't Distribute your
objects!
29. Remote and Local Interfaces
▪ Local calls
– Calls between components on the same node are local
▪ Remote calls
– Calls between components on different machines are remote
▪ Object-oriented programming
– Promotes fine-grained objects
30. Remote and Local Interfaces
▪ A local call within a process is very, very fast
▪ A remote call between two processes is orders of magnitude slower
– Marshalling and un-marshalling of objects
– Data transfer over the network
▪ With fine-grained object oriented design, remote components can kill
performance
▪ Example
– Address object has get and set method for each member, city,
street, and so on
– Will result in many remote calls
31. Remote and Local Interfaces
▪ With distributed architectures, interfaces must be coarse-grained
– Minimising remote function calls
▪ Service architectures have to have coarse-grained APIs that combine
several objects
– Avoid fine-grained interfaces
▪ Example
– Instead of having getters and setters for each field, bulk accessors
are used
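The Address example above can be made concrete with a hedged sketch (class and field names invented): each method call on the remote object counts as one round trip, so per-field getters cost three trips where one bulk accessor moves the whole object in a single call.

```python
class RemoteAddress:
    """Stand-in for a remote component: every method call is one round trip."""

    calls = 0  # round-trip counter shared across the example

    def __init__(self):
        self._data = {"street": "Main St", "city": "Reykjavik", "zip": "101"}

    def get(self, field):
        """Fine-grained accessor: one round trip per field."""
        RemoteAddress.calls += 1
        return self._data[field]

    def get_all(self):
        """Coarse-grained bulk accessor: one round trip for everything."""
        RemoteAddress.calls += 1
        return dict(self._data)

addr = RemoteAddress()
fine = {f: addr.get(f) for f in ("street", "city", "zip")}  # 3 round trips
bulk = addr.get_all()                                       # 1 round trip
assert fine == bulk
assert RemoteAddress.calls == 4
```

With each remote call costing milliseconds rather than nanoseconds, collapsing three calls into one is exactly the coarse-graining the slide asks for.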
32. Distributed Architecture
▪ Better distribution model (X scaling)
– Load Balancing or Clustering the application involves putting
several copies of the same application on different nodes
[Diagram: several identical copies of the Order application deployed on
different nodes behind a load balancer]
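The identical-copies model above can be sketched as a simple round-robin dispatcher (purely illustrative; real balancers also track node health and load):

```python
import itertools

class LoadBalancer:
    """Round-robin over identical copies of the same application."""

    def __init__(self, nodes):
        self._ring = itertools.cycle(nodes)  # endless rotation over the nodes

    def handle(self, request):
        node = next(self._ring)              # spread load across the copies
        return f"{node} handled {request}"

lb = LoadBalancer(["node-1", "node-2", "node-3"])
results = [lb.handle(f"req-{i}") for i in range(4)]
assert results[0] == "node-1 handled req-0"
assert results[3] == "node-1 handled req-3"  # wraps around the ring
```

Note this only works cleanly when the nodes are stateless or share session state; otherwise requests must stick to the node holding their session (server affinity).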
33. Where You Have to Distribute
▪ As architect, try to eliminate as many remote calls as possible
– If this cannot be achieved, choose carefully where the distribution
boundaries lie
▪ Distribution Boundaries
– Client/Server
– Server/Database
– Web Server/Application Server
– Separation due to vendor differences
– There might be some genuine reason
34. Optimizing Remote Calls
▪ We know remote calls are expensive
▪ How can we minimize the cost of remote calls?
▪ The overhead is
– Marshaling or serializing data
– Network transfer
▪ Put as much data as needed into each call
– Coarse-grained calls
– Use binary protocols – avoid XML
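A rough illustration of the payload difference (JSON stands in for XML here, and the record layout is an assumption both ends would have to agree on):

```python
# Sketch comparing payload sizes for the same record in a text encoding
# vs a packed binary encoding (field names are illustrative).
import json
import struct

record = {"order_id": 42, "amount_cents": 199900, "qty": 3}

# Text encoding (JSON standing in for XML): field names travel with
# every message.
text_payload = json.dumps(record).encode("utf-8")

# Binary encoding: both ends agree on the layout up front, so only the
# values travel: two 4-byte unsigned ints and one 2-byte unsigned short.
binary_payload = struct.pack("!IIH", record["order_id"],
                             record["amount_cents"], record["qty"])

print(len(text_payload), len(binary_payload))  # binary is 10 bytes
```

The binary form drops the field names and whitespace, at the cost of a fixed, pre-agreed layout.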
36. How Big is a Service?
The term microservices is sometimes used, but the "micro" is
misleading: size has nothing to do with lines of code
Example definition:
Balance between integration points and size
Time: Can be rewritten in one iteration (2 weeks)
Features: All things that belong together
37. Loose Coupling
When services are loosely coupled, a change in one
service should not require a change in another
A loosely coupled service knows as little as possible
about the services with which it collaborates
Source: Building Microservices
38. High Cohesion
We want related behaviour to sit together, and unrelated
behaviour to sit elsewhere
Group together stuff that belongs together, as in SRP
If you want to change something, it should change in one
place, as in DRY
Source: Building Microservices
39. Bounded Context
Concept that comes from Domain-driven Design (DDD)
Any given domain contains multiple bounded contexts,
and within each are “models” or “things” (or “objects”)
that do not need to be communicated outside, as well as
things that are shared with other bounded contexts
The shared objects define the explicit interface to the
bounded context
Source: Building Microservices
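A minimal sketch of a bounded context as a module boundary, loosely in the spirit of the book's warehouse example (the `Warehouse`, `StockItem`, and `_ShelfLocation` names and data are invented):

```python
# Sketch of a bounded context: StockItem is the shared model that crosses
# the boundary; _ShelfLocation stays internal to the warehouse context.
from dataclasses import dataclass

@dataclass(frozen=True)
class StockItem:
    """Shared model: part of the bounded context's explicit interface."""
    sku: str
    quantity: int

@dataclass
class _ShelfLocation:
    """Internal model: never communicated outside the warehouse context."""
    aisle: int
    shelf: int

class Warehouse:
    def __init__(self):
        # Private details other contexts never see.
        self._locations = {"ABC-1": _ShelfLocation(aisle=4, shelf=2)}
        self._stock = {"ABC-1": 7}

    def stock_level(self, sku: str) -> StockItem:
        # Only the shared model crosses the boundary; shelf locations
        # remain an implementation detail of this context.
        return StockItem(sku=sku, quantity=self._stock[sku])

item = Warehouse().stock_level("ABC-1")
print(item)  # prints: StockItem(sku='ABC-1', quantity=7)
```

Other contexts depend only on `StockItem`, so the warehouse can reorganize its shelves without forcing a change on its collaborators.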
41. The Right Balance
▪ In Service Architecture, we want to split by functionality (Y scaling)
– Boundaries must be well designed – objects that work together are
grouped together
– APIs must be sufficiently coarse-grained
43. Scaling the application
▪ Today’s web sites must handle multiple simultaneous users
▪ Examples:
– All web based apps must handle several users
– mbl.is handles >200,000 users/day
– Betware must handle up to 100,000 simultaneous users and 1.2
million tx/min for terminal system peak load
45. The World we Live in
▪ Average number of tweets per day 500 million
▪ Total number of minutes spent on Facebook each month
700 billion
▪ SnapChat has 100 million daily active users who send 1
billion snaps each day
▪ Instagram has over 200 million users on the platform
who send 60 million photos per day
▪ Number of messages sent by WhatsApp: 30 billion
46. Scalability
▪ Scalability is the ability of a system, network, or process to handle a
growing amount of work in a capable manner or its ability to be
enlarged to accommodate that growth
▪ With more load, how does the performance of the system vary?
47. Scalability
▪ Scalability is the measure of how adding resources (usually hardware)
affects the performance
– Vertical scalability (up) – increase server power
– Horizontal scalability (out) – add more servers
▪ Session migration
– Move the session from one server to another
▪ Server affinity
– Keep the session on one server and make the client always use the
same server
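Server affinity can be sketched as a stable hash of the session id (server names here are invented; real load balancers typically use a cookie or the source IP instead):

```python
# Sketch: server affinity by hashing the session id, so every request in
# a session lands on the same server.
import hashlib

SERVERS = ["app1", "app2", "app3"]

def pick_server(session_id: str) -> str:
    # A stable hash of the session id always maps to the same server.
    # hashlib is used because Python's built-in hash() is randomized
    # between interpreter runs.
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

# All requests carrying the same session id hit the same server.
first = pick_server("session-42")
assert all(pick_server("session-42") == first for _ in range(100))
print(first)
```

The drawback: adding or removing a server remaps most sessions; consistent hashing softens that, and session migration handles the rest.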
49. Scaling Applications
In the Internet world you want to build web
sites that get lots of users and massive
hits per second
But how can you cope with such load?
(Diagram: Browser → HTTP Server → Application → Database)
50. The Scaling Problem
▪ We need to handle a large number of requests to our system
▪ There are two ways to scale:
– Vertically or scale up: Add more capacity to your hardware, more memory
for example
– Horizontally or scale out: Add more machines
51. Scaling Up
▪ This is the traditional approach for many monolithic systems
▪ Use a big powerful system
▪ Pros:
– Easy to do, easy to understand
– One memory space and one database
▪ Cons:
– Has very hard limits
– Does not work for the 21st century requirements
52. Scaling Out (X scaling)
▪ This can work for monolithic systems if the database requirements are
not high
▪ Use many machines and distribute the load
– Have one big powerful database
▪ Pros:
– Scales well – handles much more load
– Shared database
▪ Cons:
– Session management is a challenge
– Database is a bottleneck
53. Scale Cube
X scaling: duplicate the system
Y scaling: partition the application
Z scaling: partition the data
54. Load Distribution
▪ Use number of machines to handle requests
▪ Load Balancer directs each
request to a particular server
– All requests in one session go
to the same server
– Server affinity
▪ Benefits
– Load can be increased
– Easy to add new pairs
– Uptime is increased
▪ Drawbacks
– Database is a bottleneck
55. Clustering
▪ With clustering, servers
are connected together
as if they were a single
computer
– Request can be handled
by any server
– Sessions are stored on
multiple servers
– Servers can be added and
removed any time
▪ Problem is with state
– State in application servers reduces scalability
– Clients become dependent on particular nodes
56. Clustering State
▪ Application functionality
– Handle it yourself, but this is complicated, not worth the effort
▪ Shared resources
– Well-known pattern (Database Session State)
– Problem with bottlenecks limits scalability
▪ Clustering Middleware
– Several solutions, for example JBoss, Terracotta
▪ Clustering JVM or network
– Low levels, transparent to applications
60. Amdahl’s Law
▪ This law is used to find the maximum expected improvement to an
overall system when only part of the system is improved
▪ In parallel computing, it states that a small portion of the program
which cannot be parallelized will limit the overall speed-up available
from parallelization
61. Amdahl’s Law
▪ Amdahl’s law for overall speedup:
Overall speedup = 1 / ((1 – F) + F / S)
F = The fraction enhanced
S = The speedup of the enhanced fraction
▪ Example: if we make 20% of the program 10x faster
F = 0.2, S = 10
Overall speedup = 1 / ((1 – 0.2) + 0.2 / 10) ≈ 1.22
▪ If S = 1000, overall speedup is only 1.25
62. Amdahl’s Corollary
▪ Make the common case fast
– Common case being defined as “most time consuming”
40% made 10x faster => overall speedup 1.5625
20% made 100x faster => overall speedup 1.2469
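The numbers on these two slides can be verified with a few lines of Python:

```python
# Amdahl's law, checked against the slide's examples.
def overall_speedup(f: float, s: float) -> float:
    """f = fraction of the program enhanced, s = speedup of that fraction."""
    return 1.0 / ((1.0 - f) + f / s)

print(round(overall_speedup(0.2, 10), 2))    # 20% made 10x faster   -> 1.22
print(round(overall_speedup(0.2, 1000), 2))  # 20% made 1000x faster -> 1.25
print(round(overall_speedup(0.4, 10), 4))    # 40% made 10x faster   -> 1.5625
print(round(overall_speedup(0.2, 100), 4))   # 20% made 100x faster  -> 1.2469
```

Note how little a 1000x speedup of the same 20% buys over a 10x speedup: the unimproved 80% dominates, which is why the common case must be made fast.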
63. The Optimization Process
▪ There is only one way to test scalability: Measure
– Find the bottleneck (the common case)
– Hypothesize about improvement
– Make an optimization – change only one thing at a time
– Measure again and repeat
65. Transactions
▪ A transaction is a bounded sequence of work
– Both start and finish are well defined
– A transaction must complete on an all-or-nothing basis
▪ All resources are in consistent state before and after the transaction
▪ Example: Database transaction
– Withdraw money from account
– Buy the product
– Update stock information
▪ Transactions must have ACID properties
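The all-or-nothing property can be demonstrated with SQLite's transaction support (the account/stock schema is invented for the example above):

```python
# Sketch: either both the withdrawal and the stock update commit, or
# neither does, using SQLite's built-in transactions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE TABLE stock (sku TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.execute("INSERT INTO stock VALUES ('widget', 5)")
conn.commit()

try:
    with conn:  # opens a transaction: commits on success, rolls back on error
        conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE stock SET qty = qty - 1 WHERE sku = 'widget'")
        raise RuntimeError("simulated failure before commit")
except RuntimeError:
    pass  # the transaction was rolled back as a whole

balance = conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
qty = conn.execute("SELECT qty FROM stock WHERE sku = 'widget'").fetchone()[0]
print(balance, qty)  # prints: 100 5 -- both updates were undone
```

Because the failure happened before the commit, both resources are back in their consistent starting state: this is atomicity in the ACID sense.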
66. ACID properties
▪ Atomicity
– All steps are completed successfully – or rolled back
▪ Consistency
– Data is consistent at the start and the end of the transaction
▪ Isolation
– Transaction is not visible to any other until that transaction commits
successfully
▪ Durability
– Any results of a committed transaction must be made permanent
67. Transactional Resources
▪ Anything that is transactional
– Use transaction to control concurrency
– Databases, printers, message queues
▪ Transaction must be as short as possible
– Provides greatest throughput
– Should not span multiple requests
– Long transactions are transactions that span multiple requests
68. Transaction Isolations and Liveness
▪ Transactions lock tables (or resources)
– Need to provide isolation to guarantee correctness
– Liveness suffers
– We need to control isolation
▪ Serializable Transactions
– Full isolation
– Transactions are executed serially, one after the other
– Benefits: Guarantees correctness
– Drawbacks: Can seriously damage liveness and performance
69. Isolation Level
▪ Problems can be controlled by setting the isolation level
– We don’t want to lock table since it reduces performance
– Solution is to use as low isolation as possible while keeping
correctness
70. Problem
▪ Serialization creates scalability bottlenecks
▪ Applications that require fully serializable transactions in an RDBMS
have a hard time with scale
▪ Can we sacrifice something?
– Can we relax these requirements?
71. CAP Theorem
▪ States that it is impossible for a distributed computer system to
simultaneously provide all three of the following guarantees:
– Consistency: all nodes see the same data at the same time
– Availability: a guarantee that every request receives a response
about whether it was successful or failed
– Partition tolerance: the system continues to operate despite
arbitrary message loss or failure of part of the system
73. ACID vs. BASE
▪ BASE: Basically Available, Soft state, Eventual consistency
▪ Basically Available: Guarantees availability of the database
▪ Soft state: The state of the system can change over time - even without
input.
▪ Eventual consistency: The system will eventually become consistent
over time given no new input
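A toy sketch of eventual consistency (all names invented): the primary acknowledges a write immediately and replicates it asynchronously, so a replica read can be stale until replication catches up:

```python
# Toy model: a primary replicates writes to a replica asynchronously, so
# the replica lags but converges once the pending updates drain.

class Replicated:
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = []  # asynchronous replication queue

    def write(self, key, value):
        self.primary[key] = value          # acknowledged immediately
        self.pending.append((key, value))  # replicated later

    def drain(self):
        # Simulate the asynchronous replication catching up.
        while self.pending:
            key, value = self.pending.pop(0)
            self.replica[key] = value

store = Replicated()
store.write("user:1", "Alice")
stale = store.replica.get("user:1")  # replica has not caught up: None
store.drain()                        # given no new input, replicas converge
fresh = store.replica.get("user:1")
print(stale, fresh)  # prints: None Alice
```

The write is "basically available" (the primary answers at once), the replica is in a soft state while the queue drains, and with no new input the system ends up consistent.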
74. ACID vs. BASE
▪ The difference has more to do with synchronous versus asynchronous
messaging
▪ For large-scale systems, asynchronous messaging gives the fastest and
least restricted workflow
76. Measuring Scalability
▪ The only meaningful way to know about system’s performance is to
measure it
▪ Performance tools can help with this process
– Give indication of scalability
– Identify bottlenecks
79. Summary
▪ Requirements of 21st century web applications
– Availability, Eventual consistency
▪ Session State
– Client, Server, Database
▪ Distribution Strategies
– Don’t distribute fine-grained objects – identify boundaries
▪ The Scale Cube
▪ Eventual Consistency
– CAP Theorem
▪ Real World Example