Re-architecting a system is fraught with problems:
* We almost always underestimate the difficulty of said task
* We almost never completely understand our customers’ needs
* The system almost always changes out from underneath us
However, what we can sometimes forget is the most important aspect of any project: the team! Your team, which miraculously survives these projects, has to keep changing with your business and your technology. But don't fret: we can architect our way toward this way of thinking! Let's figure out how to be resilient and adaptive with the help of each and every team member. Ignore re-architecting your team at your own risk!
The decision and process behind rewriting or re-architecting a system is often plagued by a series of problems: people always underestimate the complexity, people never fully understand the customers, system requirements constantly change out from under them, and, in almost all cases, the work takes much longer than anybody can predict. As part of this workshop, we'll look at a couple of case studies of re-architecture to glean strategies for success from them as well as common pitfalls to avoid. This workshop should arm you with a framework for approaching your own decisions about how to manage, maintain, and evolve your own systems:
* understanding the underlying motivations;
* developing a method for deciding whether to evolve or to rewrite;
* managing the engineering effort of re-architecture in the midst of a changing business;
* setting up metrics to understand whether you’re on course; and
* organizing the engineering team and the culture to ensure success
Our talk covers the migration of the Twitter architecture from primarily Ruby on Rails (RoR) to a JVM-based SOA system with emphasis on high performance, scalability, and resilience to failure. General lessons include the advantages of asynchronous, real-time architectures over synchronous, process / thread-oriented systems, as well as caching and data store patterns.
http://thinkvitamin.com/events/geolocation-online-conference/
What do users want with geo?
In this session Raffi will be discussing what companies are doing right now with geo and where they are being most successful. He'll also take an in-depth look at the privacy concerns and UI implications, along with what users actually want from geo-enabled apps and where the opportunities lie in the future.
The slides to a tech talk I gave as part of @TwitterU at UC Berkeley on 9 September 2010.
See the blog post at http://mehack.com/twitter-by-the-numbers, and an animated version of the slide deck at http://www.youtube.com/watch?v=TdY0jU697lY
Intro to developing for @twitterapi (updated), by Raffi Krikorian
A short primer on how to develop for the Twitter API.
This is the newly edited version of http://www.slideshare.net/raffikrikorian/intro-to-developing-for-twitterapi
How to use Geolocation in your webapp @ FOWA Dublin 2010, by Raffi Krikorian
Building geolocation into your web app is becoming a necessity for almost everyone these days. It's a complex problem though, so in this session you'll learn from how Twitter is doing this and pick up important lessons for your web app.
Twitter has launched a Geotagging API – we really wanted to enable users to talk not only about "What's happening?" but also "What's happening right here?" For a while now, we've been watching as users have tried to geo-tag their tweets through a variety of methods, all of which involve a link to a map service embedded in their Tweet. This talk will delve into how Twitter handles its geocontent, including tool suggestions.
As a platform, we’ve tried to make it easier for our users by making location omnipresent throughout our platform, and an inherent (but optional) part of a tweet. We’re making the platform about not just time, but also place.
Social applications have been venues for people to converse, emote, and share -- and in those applications, "when" has always been inherent and well captured, but the other contextual signal, "where", has been (usually) conspicuously missing. Location, when taken into account, can provide rich signals to help understand social connectivity whilst helping to discover and surface content. Numerous devices and infrastructure services have the ability to expose location, but comprehending how to best make use of these technologies can be complex. Additionally, after the infrastructure is put in place, the next hurdle to overcome is understanding how to create a usable location-based feature that users can comprehend and love while also feeling safe and secure.
This session is targeted to those who want to learn about these technologies, and to those who want to understand how to think about their users' needs, their security, and their privacy. We'll also review web and mobile services that have been designed with location at their core, or location as a feature. And, finally, we'll talk about how Twitter thinks about adding "where" to our "when".
It is clear that the lifestyles of the Western world have become unsustainable. Fossil fuel scarcity and global climate change threaten great economic and environmental damage to the world. Individuals have been looking for ways to understand their contribution to global and local energy use, and to make better decisions to reduce their impact. WattzOn.com provides users with an online tool to calculate, track, compare, understand, and budget their personal energy consumption – much in the same way they would manage their finances.
In doing this, WattzOn also strives to innovate on the tools currently available for personal energy tracking. The status quo on personal impact tools involves online “carbon calculators,” which are already ubiquitous on the Internet. However, these calculators suffer from fundamental flaws that prevent them from becoming an effective tool for change. First, they are static and therefore do not react to improvements in knowledge or allow the addition of data. They are also “black box” in operation and do not clearly show a user how their energy numbers were calculated and what assumptions were made. Finally, they have a singular focus on carbon emissions, which doesn’t fully characterize power usage independent of fossil fuels. WattzOn changes this by providing the entire community with a collaborative environment to understand and manipulate how the numbers are calculated, while also shifting to a more comprehensive paradigm by tracking total power usage in watts.
The WattzOn back-end is powered by a unique database nicknamed “holmz” – holmz is a structured wiki-engine that lets people not only manipulate and share text, but also collectively edit structured data and workflows. With holmz, WattzOn users can debate how much energy goes into, say, growing an apple; separately debate how much energy is needed to transport the apple to a local grocery store; and then have another set of people combine those two together into one result: the energy cost of an apple. Finally, that data propagates through the system, updating the total watts calculated for people whose profiles indicate that they eat apples. The crowd can collaborate on getting all the individual parts of the equation correct so that everybody may benefit.
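The crowd-edited-calculation idea above can be sketched in a few lines; the structure and all the numbers below are illustrative assumptions, not WattzOn's actual code or data:

```python
# Toy sketch of crowd-edited calculations that propagate, holmz-style.
# The factors and values are illustrative placeholders.
factors = {
    "grow_apple_MJ": 1.5,   # debated and maintained by one group
    "ship_apple_MJ": 0.5,   # debated and maintained by another
}

def energy_cost_of_apple():
    # a third group combines the individual pieces into one result
    return factors["grow_apple_MJ"] + factors["ship_apple_MJ"]

def profile_total_MJ(profile):
    # combined results propagate into every matching user profile
    total = 0.0
    if profile.get("eats_apples"):
        total += energy_cost_of_apple()
    return total

print(profile_total_MJ({"eats_apples": True}))  # 2.0 with these placeholder factors
```

Editing either factor automatically changes the combined apple cost, and with it every profile that depends on it.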
WattzOn is currently a global tool with assumptions made on a national/statewide scale. However, the ultimate goal of the system is to be able to accurately understand the exact power needs of a user’s lifestyle. Given differences in distances from product sources, weather, population density, and transportation options, the impact of any decision will be dependent on the location of the individual. By allowing users to populate the database with information from their own lives (either manually or passively by linking to online bills), the accuracy of the calculations will improve. Once this detailed database exists, anyone will be able to run specialized queries and create clear graphics to illustrate power usage of various communities and groups.
By giving individuals a tool to clearly visualize how they are “spending” energy, we hope that they will take measures to lessen their impact on the world, ultimately spurring widespread energy reduction in our society.
A series of screenshots of a WattzOn kiosk that is coming together now -- part of that kiosk is a "whole earth simulator" to let people play with variables like the energy mix to understand how those affect the world at large.
* my name is raffi krikorian
* work on the platform team - responsible for the APIs that developers use to call Twitter
* brady coerced me into giving a talk entitled "energy per tweet"
* for a while i’ve been interested in sustainability
* the crux of it is how can we do things better for the planet?
* how can we do things in a way that is not harmful for the planet -- whether this means getting electricity through renewable means, or using things that are recycled
* before we can really talk about energy per tweet, we should talk about some basic physics that i'm sure you've all long forgotten
* there are two basic quantities that i should refresh everybody's knowledge on
* energy and power -- they are intertwined, but their stories are going to flow through the rest of my talk
* energy == work
* there are two ways to think about it
* another way to think about it literally goes back to work. if you were to pick up a kg from the floor and put it on a table, say a meter high, you've done work. you've expended energy. about ten joules of energy, to be more precise (9.8 J, if you're keeping score).
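* to make the arithmetic concrete, here's that calculation in a few lines of python (standard gravity assumed):

```python
# gravitational potential energy: E = m * g * h
g = 9.8          # m/s^2, acceleration due to gravity at Earth's surface
mass_kg = 1.0    # the kilogram weight
height_m = 1.0   # the one-meter-high table

energy_joules = mass_kg * g * height_m
print(energy_joules)  # 9.8 J -- call it ten
```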
* power is the companion to energy in this story
* power is a “Rate” - its something that you keep on spending
* a watt, for example, is one joule (remember the kilogram weight?) spent per second.
* if i were to move that weight from the floor to the table in a second, i would have exerted about ten watts of power
* but i would have to keep on doing that over and over to maintain that power
* the lightbulb is probably the most common thing where people interact with power
* a 100W lightbulb is expending as much power as me lifting a 10 kilogram weight (about 22 pounds) up to a one meter high table every single second, and continuing to do that. imagine that: hoisting a sack of potatoes a meter into the air, every second, just to keep that lightbulb running.
* my 13" macbook pro has a 60W power adapter. that's less power than an incandescent bulb. you can see why people switch to CFL light bulbs that only use 23W to give the same light as a 100W incandescent bulb.
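* the power arithmetic from the last few bullets, checked in python (9.8 m/s^2 gravity assumed):

```python
g = 9.8                            # m/s^2
joules_per_lift = 1.0 * g * 1.0    # one kilogram, one meter: ~9.8 J

# power is a rate: joules per second
watts_one_lift_per_second = joules_per_lift / 1.0

# how much mass would you have to hoist one meter every second
# to match a 100 W incandescent bulb?
bulb_watts = 100
mass_needed_kg = bulb_watts / g

print(round(watts_one_lift_per_second, 1))  # 9.8 W
print(round(mass_needed_kg, 1))             # ~10.2 kg, every single second
```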
* how does this relate to tweeting?
* there are a few things that i want to pull apart, and they are all core to twitter
* of course, a mobile is in twitter's DNA
* as you know, a huge percentage of our traffic comes from mobile devices from all over the world
* let's look at one of these devices, the iPhone
* when i first ordered my iPhone, i anxiously waited for it to show up
* i remember tracking it on fedex, watching this magical device be shipped from shenzhen, china, to alaska, and then from there to me
* it flew across the world for it to show up at my door.... and then for me to throw away a year later when the 3G came out.
* that’s a prime example of globalization
* not only is the device coming from china and being shipped to me, but its components come from all around the world and need to get to china to be put together
* things like the CPU and the video chip need to come in from singapore, and they need to get the silicon from somewhere
* internal circuitry comes from taiwan
* plastics come from oil which probably needs to be shipped in from the middle east
* there is a whole story of the world that is going on in my phone alone
* and just imagine how much energy was spent putting this all together. some estimates put it at 400 megajoules. that's enough energy to lift that kilogram weight roughly 40,000 kilometers straight up, about a tenth of the way to the moon.
* there is something called the basal metabolic rate (BMR)
* that's the amount of energy expended while at rest - mine is 1800 C - basically, that's how many calories i burn by just sleeping all the time
* a small hummingbird, on the other hand, only consumes about 6 C.
* but, look how small it is! i do something like 20 kilocalories per kilogram of "me", whereas the hummingbird does about 1600 per kilogram of "it"
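* the per-kilogram comparison, worked out in python (the body masses, ~90 kg for me and ~3.75 g for the hummingbird, are back-of-envelope assumptions):

```python
human_kcal_per_day = 1800   # my basal metabolic rate, in Calories (kcal)
human_mass_kg = 90          # assumed body mass
bird_kcal_per_day = 6       # a small hummingbird's daily consumption
bird_mass_kg = 0.00375      # a ~3.75 gram hummingbird (assumed)

print(round(human_kcal_per_day / human_mass_kg))  # 20 kcal per kg of "me"
print(round(bird_kcal_per_day / bird_mass_kg))    # 1600 kcal per kg of "it"
```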
* small little dynamo
* so - given how much energy is spent at rest, what if you’re actually doing something?
* get off your butt, take that iphone i talked about, and send a tweet?
* for somebody like me, it really comes out to about 1 kilocalorie (a big-C Calorie) to type frantically for about a minute. so, let's say it takes you a minute to compose a tweet. you're burning about 1 Calorie to pull that off.
* is that a good weight loss plan?
* sarcastically : *maybe*
* we only let you send about 120 tweets every three hours - so you could burn off about 120 calories every three hours (but you would have to type something new every time)
* a mocha from starbucks puts about 300 calories onto my hips, so if i can just cut those off and tweet more, then i would be good. (of course, don’t forget you eat other stuff throughout the day)
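the weight-loss math above, sketched out (the 1 Calorie per tweet, the 120-tweet rate limit, and the 300 Calorie mocha are the rough numbers from the talk):

```python
kcal_per_tweet = 1.0      # ~1 Calorie of frantic typing per tweet
tweets_per_3_hours = 120  # the rate limit mentioned above
mocha_kcal = 300.0        # one starbucks mocha

# max burn rate from tweeting alone
kcal_per_3_hours = kcal_per_tweet * tweets_per_3_hours

# how long you'd have to tweet at the rate limit to cancel out one mocha
hours_to_burn_mocha = 3 * mocha_kcal / kcal_per_3_hours
print(hours_to_burn_mocha)  # -> 7.5 hours of max-rate tweeting per mocha
```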
* the biggest question, however, is how much electrical energy it takes for twitter to process a tweet
* just to get it out in front, i’m probably not going to say the actual number, but i’ll give some numbers that are in the ballpark
* where do we use energy? we have to power all the machines in our data center. if you’ve ever been in our war room at operations in the office, we’re powering projectors, screens, computers, VPN links, etc., all to monitor the system. we spend a lot of computing power to smoothly run through our 50 million tweets a day.
* all of this really comes down to our architecture -- which is just a whole bunch of tubes. we have to shuttle tweets around the system in real time.
* it’s pretty well known that we run ruby on rails (mostly) on our front end servers, and we run mysql as the backend data storage system
* and we have some really great talks tomorrow from people like ryan king, eric jensen, john kalucki, and others talking a lot more about our backend infrastructures. things like how we are changing the servers that are running our systems, how search takes incoming tweets and consumes them, and how we push this data out of our system in real time.
* basically, we have hundreds of computers focused on the problem of ingesting tweets and getting them out again
* thinking about all that gets overwhelming quickly. if we just want a zeroth order estimate -- a really rough estimate -- of how much energy a tweet takes, we have to think about the social graph
* fundamentally, twitter has to take an incoming tweet that you send, and put it in the timelines of all the people who are following you.
* if a lot of people are following you (say, you’re barack obama and have 3.6 million followers), twitter has to make sure that tweet gets delivered to all 3.6 million of them
* if we were to take a postal system analogy, you give a tweet to a mailman who has to then copy that tweet and stick it into the mailboxes of all your followers
* the more followers you have, the more people we have to deliver that tweet to
* followers follow a power law -- a few people have a huge number of followers, and a lot of people have a small number of followers. let’s say, on average, that comes out to about 120 people who follow you.
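the mailman analogy above is what's usually called fan-out-on-write; here's a toy sketch (not twitter's actual implementation, just illustrating that delivery work scales with follower count):

```python
from collections import defaultdict

# toy follower graph: user -> set of users who follow them
followers = {"alice": {"bob", "carol"}, "bob": {"carol"}}

# each user's home timeline is a "mailbox" of delivered tweets
timelines = defaultdict(list)

def deliver(author, tweet):
    """fan-out-on-write: the mailman copies the tweet into every follower's mailbox."""
    for follower in followers.get(author, set()):
        timelines[follower].append((author, tweet))

deliver("alice", "hello world")
# alice has 2 followers, so that was 2 deliveries; obama's 3.6M followers
# would mean 3.6M deliveries for a single tweet
print(timelines["bob"])
```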
* the experiment is as follows
* we have a development mode for twitter codebase
* we can run the rails app in dev mode
* but we still start up almost all the relevant portions of the application
* can run the entire thing on our developer laptops to get an almost full twitter experience
* run the development mode version of twitter (memcache, database, etc.) all on laptop
* cleanly restarted the laptop
* loaded it up with requests, calling /statuses/update over and over and over
* basically, trying to max out the system on the laptop to see how many tweets we can push through
* really, we’re trying to measure the throughput of the system. we know how much power my laptop is taking, so we just want to pump through the most number of tweets that we can in that time.
* 65W laptop - can push out about a tweet every 1.25 seconds
* multiply those out - it’s about 90 J / tweet
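the arithmetic behind that figure, as a sanity check (65 W sustained for ~1.25 s per tweet; the talk rounds up to ~90 J, which is the same ballpark):

```python
laptop_watts = 65.0       # laptop power draw
seconds_per_tweet = 1.25  # measured throughput: one tweet every 1.25 s

# energy = power * time
joules_per_tweet = laptop_watts * seconds_per_tweet
print(joules_per_tweet)   # -> 81.25, i.e. on the order of 90 J / tweet
```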
* google has said that each google search takes about 1 kJ (roughly 10x more expensive than a tweet)
* the good news: i’m running this in development mode - our numbers in production are more like 3x less than that
* going with my 90J / tweet number, that’s something like 0.02 g of CO2 per tweet
* that’s a small number, but small numbers add up - if we put in our 50M tweets per day, then we’re talking 1000 kg of CO2 a day
* if we ran all the tweets through my laptop, that’s one metric ton of CO2 a day
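scaling that per-tweet footprint up to a whole day's traffic (using the ~0.02 g CO2 per tweet figure above):

```python
grams_co2_per_tweet = 0.02   # rough footprint from the ~90 J / tweet estimate
tweets_per_day = 50_000_000  # our daily volume

kg_co2_per_day = grams_co2_per_tweet * tweets_per_day / 1000.0
print(kg_co2_per_day)  # roughly 1000 kg, i.e. about one metric ton a day
```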
* again, thankfully, we’re better than that