Between Hermeneutics and Deceit: Keeping Natural Language Generation in Line (Leah Henrickson)
Presented with Dr Albert Meroño-Peñuela at the Digital Humanities Congress (10 September 2022), organised by the University of Sheffield's Digital Humanities Institute. Argues for explicit acknowledgement of the hype surrounding AI-driven natural language generation (NLG) systems, using prompt engineering to dispel understandings of language usage as a line of thought. Note that the formatting of the slides has been muddled by SlideShare - please download the slides if you wish to see the intended formatting.
Slides for a workshop on game design for storytellers, treating narrative not as the core but as one of several useful components. We explore the game universe, give a short introduction to game design, explore the different meanings of narrative in, on, and from games, and then try a game design exercise.
Shall We Play a Game? Gaming the System, When the System Is Your Learning Man... (Nikki Massaro Kauffman)
This session will provide an overview of the world of games and discuss how we envision the adaptation of specific elements in the educational interface. What is it about games that engages and motivates players, and how can we apply this interest to increase success in online learning? We’ll also consider the future of games, and how this developing technology might level up learning as well.
Computational Humor: Can a Machine Have a Sense of Humor? (2022, Thomas Winters)
Can computers have a sense of humor? In this talk, we discuss humor theory, some symbolic humor generation methods and then showcase how prompt engineering can help generate humor automatically.
This talk has been given multiple times by Thomas Winters. This particular version has been personalized for the keynote talk of the postgraduate AI students graduation on the 13th of September 2022.
More information about this talk at https://thomaswinters.be/talk/2022kulak
Virtual Reality, a simulated environment in three dimensions, is not new, but emerging technologies and companies like Facebook and Microsoft have recently pushed it back into the spotlight. There is a huge future in VR and meaningful experiences are being developed for it. In this webinar:
~ Discover what Virtual Reality is and gain a brief historical summary of it
~ Understand how VR will change everything ranging from gaming to education
~ Learn about the various products coming out in 2015
~ See how libraries and makerspaces are making use of VR
“What is real? How do you define 'real'? If you're talking about what you can feel, what you can smell, what you can taste and see, then 'real' is simply electrical signals interpreted by your brain.” ~ Morpheus
Talk given at Interactive Narrative Design Think Tank, Nederlands Film Festival September 29, 2019.
Overview:
1. AI for Games/Interactive Narrative
2. Developments, past decade
3. Tech at our finger tips:
Procedural Content Generation
Machine learning
4. Opportunities, Challenges and wish lists
SXSW is a two-week festival (not a conference) where the best of tech, film, and music collide. It's the "Spring Break for Geeks," the thing you're supposed to get lost in, the way-less-serious and way-more-fun version of TED. It's the only place where you'd find all of these people in a two-week span:
- Vice President Joe Biden
- DJ/producer duo The Chainsmokers
- Billionaire investor Mark Cuban
- Elon Musk's brother Kimbal Musk
- Rapper Rick Ross
- CTO of Pixar Steve May
- Actor Seth Rogen
Can't wait for next year.
How to Stop Sucking and Be Awesome Instead (codinghorror)
If you're reading this abstract, you're not awesome enough. Attend this session to unlock the secrets of Jeff Atwood, world-famous blogger and industry-leading co-founder of Stack Overflow and Stack Exchange. Learn how you too can determine clear goals for your future and turn your dreams into reality through positive-minded conceptualization techniques.* Within six to eight weeks, you'll realize the positive effects of Jeff Atwood's wildly popular Coding Horror blog in your own life, transporting you to an exciting new world of wealth, happiness and political power.
Interviews with well-known game industry veterans Niklas Lundberg, John Romero, and Ivan-Assen Ivanov, with lots of gems of programming advice for students on the structure and design of their programs. The advice here should be really valuable for /anyone/ making software.
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez.pdf (fxintegritypublishin)
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems, as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, which is particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas.
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Cosmetic shop management system project report.pdf (Kamal Acharya)
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The project includes various functions to carry out the tasks mentioned above.
Data file handling has been used effectively in the program.
The automated cosmetic shop management system deals with the automation of the shop's general workflow and administration processes. The main processes of the system focus on customer requests: the system searches for the most appropriate products and delivers them to the customer. It helps employees quickly identify cosmetic products that have reached their minimum quantity, keeps track of the expiry date of each product, and helps employees find the rack number in which a product is placed. It is also a faster and more efficient way of working.
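The low-stock and expiry checks described above can be sketched as plain Python functions. The `Product` fields and thresholds here are invented for illustration and are not the report's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Product:
    name: str
    rack: int           # rack number where the product is placed
    quantity: int       # units currently in stock
    min_quantity: int   # reorder threshold
    expiry: date

def low_stock(products):
    """Products that have reached their minimum quantity."""
    return [p for p in products if p.quantity <= p.min_quantity]

def expired(products, today):
    """Products whose expiry date has passed."""
    return [p for p in products if p.expiry < today]
```

A real system would persist this in a data file or database, but the queries stay this simple.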
Hierarchical Digital Twin of a Naval Power System (Kerry Sado)
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real time or faster, which can modify hardware controls. However, its advantage stems from distributing computational effort via a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability, while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the small maximum deviations between the developed digital twin hierarchy and the hardware.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
Forklift Classes Overview by Intella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS... (ssuser7dcef0)
Power plants release a large amount of water vapor into the atmosphere through the stack. The flue gas can be a potential source for obtaining much-needed cooling water for a power plant. If a power plant could recover and reuse a portion of this moisture, it could reduce its total cooling water intake requirement. One of the most practical ways to recover water from flue gas is to use a condensing heat exchanger. The power plant could also recover latent heat due to condensation as well as sensible heat due to lowering the flue gas exit temperature. Additionally, harmful acids released from the stack can be reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated phenomenon, since heat and mass transfer of water vapor and various acids occur simultaneously in the presence of non-condensable gases such as nitrogen and oxygen. The design of a condenser depends on knowledge and understanding of the heat and mass transfer processes. A computer program for numerical simulations of water (H2O) and sulfuric acid (H2SO4) condensation in a flue gas condensing heat exchanger was developed using MATLAB. Governing equations based on mass and energy balances for the system were derived to predict variables such as flue gas exit temperature, cooling water outlet temperature, and the mole fractions and condensation rates of water and sulfuric acid vapors. The equations were solved using an iterative solution technique with calculations of heat and mass transfer coefficients and physical properties.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique built on the binary heap data structure. It is similar to selection sort: we repeatedly select the extreme element (the heap root), move it into its final position, and repeat the process for the remaining elements.
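A minimal sketch of heapify, build-heap, and heap sort on a Python list, in the standard textbook max-heap formulation:

```python
def heapify(a, n, i):
    """Sift a[i] down so the subtree rooted at i is a max-heap of size n."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    # Build a max-heap, sifting down from the last internal node to the root.
    for i in range(n // 2 - 1, -1, -1):
        heapify(a, n, i)
    # Repeatedly move the max to the end, then re-heapify the shrinking prefix.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        heapify(a, end, 0)
    return a
```

Build-heap is O(n) and each of the n extractions costs O(log n), giving O(n log n) overall with O(1) extra space.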
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Immunizing Image Classifiers Against Localized Adversary Attacks (gerogepatton)
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversarial training.
18. Computers are mostly terrible ✅
You have to play dumb to get anything done ✅
War stories are cool
19. “A mental model is an explanation of someone’s thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person’s intuitive perception about his or her own acts and their consequences.”
Hello!
It’s really cool to be here at such an amazing conference! I really love this room, though the table layout reminds me of a wedding. This is the weirdest wedding I’ve ever been at.
This is a talk about mental models that plausibly sounds like it could be part of a software delivery conference!
It covers some topics already mentioned in other talks, hopefully I won’t contradict them too much.
First I’m going to spoil my own talk. Here’s what you will take away from this talk.
If “computers are terrible” is a surprise, you are probably at the wrong conference, and you weren’t watching Laura’s talk just there.
But before I cover those topics, does anybody remember this?
Charity suggested on Twitter that we should do some deployments on stage!
So I’m also going to do a deployment and talk a little bit about why deployments are important to Intercom. Later on I’ll chat through some of the details of our deployment process.
So I’m going to ship something to Intercom live on stage. This is a terrible idea, live demos can go badly and be very tedious to watch!
The deployment will probably work. It will definitely eventually work, our deployment pipeline is generally robust, though it’s not extremely fast and is subject to the occasional wobble.
There’s a chance it may not go through by the end of this talk especially if I get a bout of nerves and
make a mistake!
If it doesn’t work, I promise to keep coming back to this conference to talk every year until I do manage to ship successfully to Intercom during a talk.
Here’s what I’m going to ship. It’s a simple redirect of app.intercom.io/THE SHIP EMOJI to shipitcon.com. You can test it now to see if it works.
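A redirect like this is a one-line route in most web frameworks. As a framework-neutral sketch (the table and function names here are invented, not Intercom's actual code), the logic is just a path-to-URL lookup:

```python
# Hypothetical redirect table; "\N{SHIP}" is the ship emoji in the URL path.
REDIRECTS = {"/\N{SHIP}": "https://shipitcon.com"}

def resolve(path: str):
    """Return an HTTP (status, location) pair for a request path."""
    target = REDIRECTS.get(path)
    return (302, target) if target else (404, None)
```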
After I kick this off, I’ll explain a little about Intercom and why we care so much about shipping software.
I work at Intercom. Our Dublin office is literally next door, I spend a reasonable amount of time every day looking out of our window onto the roof of this building.
Intercom is an Irish software startup that helps online businesses talk to their customers, primarily using a messenger.
We provide software as a service to our customers.
Here’s our messenger. There’s gifs, emojis.
There’s also a backend app which has a lot more going on but doesn’t work as well on giant screens!
The messenger space is pretty competitive, there’s a huge opportunity to create a lot of value, and so we have to move fast.
In the R&D team at Intercom we use principles to drive our work.
These allow us to teach, share what we’ve learned and scale how we think about building great product.
They’re a bunch of lessons learned. They’re opinionated, and not simply a bunch of truisms.
The principles distill how we think about building great product.
Here are 3 of our principles we use in our R&D organisation.
“Ship to learn” is a universal R&D principle. The sooner we ship, the quicker we learn how our product is used. Getting software into users’ hands lets you understand quickly how it’s used.
“What you ship is what matters” is a design principle, used by our design team.
Our designers care about what is actually built, delivered and usable by our users, not the artifacts created along the way. The process is important, but the output is the critical part.
Sketch? Figma.
“Build in small steps” is a direct instruction to our engineers. Make small changes frequently. Break work down into safer, smaller steps.
This doesn’t just refer to changes done via code deploys and pull requests, but all the usual modern “testing in production” techniques such as feature flags. In addition to being iterative and assistive for an agile development process, there are secondary benefits such as helping our availability and quality, and again it lets us understand what actually happens when we ship what we’re building.
These aren’t universal truisms. It would be reckless of an infrastructure provider to globally or naively apply these. They also imply a bunch of support that you need to do to allow these to happen.
Ok, so getting back on track. Computers are great! They can do things like tell the time, save files, do basic arithmetic, run automated tests, deploy software and talk to each other in large distributed systems just so that we can be all agile and stuff when we want to serve our Uber for cats startup.
The thing is, computers are terrible at all of those things. Or rather, those things sound simple, and obvious but reality is quite difficult. And things get even worse when you have lots of computers attached to a network!
Almost all of the time, what we are working with are things of which we have imperfect, incomplete and at times plain wrong mental models. Almost all of the time, this is ok. It is also utterly essential to getting anything done that this remains the case.
So let’s look at a very basic thing that everybody does with computers every day.
Haha, what even is this thing? It’s a 3.5 inch floppy disk from the 90s! But it’s also used as the icon to save data everywhere, even though nobody saves anything to disk these days.
There are a million tweets along these lines, this is not an original joke. Nobody really uses floppy disks any more, we get it. Detachable storage is generally greatly frowned upon, don’t we all use Dropbox etc.?
But not only is this a terrible icon, reliably saving stuff to any type of disk is surprisingly hard.
The problem is that there are no guarantees about what happens when the program you’re writing calls the write() system call. The OS, the filesystem, and the disks themselves have multiple intermediate layers that are crucial for a high-performing system but make it hard to reason about where your data actually is.
The “documentation” doesn’t help. Here’s an extract from “man mount” on a modern Linux system. The “rumour” is from around 2001, nearly 20 years old.
There are some common strategies for working around the limitations of filesystems, such as renaming files to achieve atomicity. I’ve done this a load of times in simple situations, but actually being able to recover the data in all circumstances is much harder than it looks. If you’re writing parts of MySQL or PostgreSQL, or a distributed system like S3 with serious data durability requirements, you need to dig deep here and understand the precise behaviour of the different parts that make up a disk (caches, platters etc.) as well as the drivers and filesystems.
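The rename trick looks like this in practice. This is a sketch of the common POSIX pattern (write to a temp file, fsync, rename over the target, fsync the directory), and it deliberately ignores the deeper failure modes discussed above:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Replace path so readers see either the old contents or the new, never a mix."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
        os.fsync(fd)            # push the file contents toward the device
    finally:
        os.close(fd)
    os.replace(tmp, path)       # atomic rename on POSIX filesystems
    # Durability of the rename itself also needs a directory fsync.
    dir_fd = os.open(directory, os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
```

Even this leaves questions open (crash between write and rename leaks the temp file; semantics differ across filesystems and platforms), which is exactly the point of the paragraph above.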
For the rest of us? It’s ok to think that when you click the floppy disk and save the file, that the file is saved. It works the vast majority of time and you have to pretend you don’t know what’s happening.
So we’ve shown that computers are terrible and you have to ignore the reality to actually do anything.
This is a good quote, lifted from @copyconstruct’s writeup about mental models.
The mental model you need to have unless you’re a MySQL or kernel developer is “the computer saved my file”. I guess the trick is knowing when to go deeper. When things break, that’s a great time to go deeper!
So here’s a war story from Intercom where we had to rebuild our understanding of something.
Here’s a network diagram of our production setup in the cloud. We’re hosted entirely in Amazon Web Services in us-east-1 (North Virginia).
There are different network subnets in our cloud. We use three. This is where our services live. I’m not really dumbing this down, we try to keep things very simple.
A subnet has different bits of configuration, like the IP addresses it can use.
It also has a routing table that tells the computers where to send the packets.
This routing table is ALSO VERY SIMPLE - a small number of entries.
Dumpster fire, everything’s fine dog.
The same routing table is used across all subnets.
We use TERRAFORM, an infrastructure-as-code tool, to manage a bunch of our AWS infrastructure, including our network setup, routing tables, etc. It allows us to define our network in code, and it translates this into AWS API calls.
We added NEW SUBNETS, but the way we had configured TERRAFORM meant that when it was adding subnets, it would REMOVE AND ADD BACK THE ROUTING TABLES WHEN A NEW SUBNET WAS ADDED. I’ll say that again: when we add a new subnet, the routing table is removed from all subnets, then recreated and added back to all the subnets. This made a conceptually simple change into a complex, dangerous change. You wouldn’t do that in the AWS UI.
The reason for this was that Terraform’s language is pretty basic and gives very few primitives to program with.
By now you can probably guess where this is going, though trust me it gets better.
We added some new subnets, but got the new IP ranges wrong. They were overlapping with existing ones.
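This kind of mistake is cheap to catch before applying anything. As a generic pre-flight check (not part of our actual pipeline), the standard library's `ipaddress` module can flag overlapping CIDR ranges:

```python
import ipaddress

def overlapping_subnets(existing, proposed):
    """Return (proposed, existing) CIDR pairs that overlap."""
    current = [ipaddress.ip_network(cidr) for cidr in existing]
    clashes = []
    for cidr in proposed:
        net = ipaddress.ip_network(cidr)
        for old in current:
            if net.overlaps(old):
                clashes.append((str(net), str(old)))
    return clashes
```

Run against the planned change, an overlap shows up as a failing check instead of a production outage.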
The routing table got DELETED by Terraform, it tried to create the new subnets then boom.
And then it basically gave up after it couldn’t create the new subnets.
Complete network outage of our production cloud environment in AWS for 14 minutes and 57 seconds.
Engineers, mostly in SF, did an amazing job getting us back to a good state. This was on a Friday evening. I was actually in the pub for my 40th birthday watching all this unfold over Slack. It was very impressive to see our Incident Command and global shared on-call kick in. Once we were in major event mode, a large number of engineers joined to help out. This was great to see, especially from the pub :D
In general, automation is amazing, but sometimes it can really bite you in the ass.
We use OpsWorks, which is basically hosted Chef, to manage our Elasticsearch clusters. This was a fully documented but not quite well understood feature of AWS’s OpsWorks service. Because the hosts weren’t contactable, OpsWorks decided to “autoheal” them by moving them to new hardware. All the search data was stored on the nice fast local disks - gone. So basically OpsWorks auto-healed the shit out of our Elasticsearch clusters, leaving us with 10 wonderfully empty clusters.
We built a list of known automation, examined CloudWatch logs, and looked for areas where errors could result in damage or where there weren’t safeguards against destroying production infrastructure - the “single bullet gun”.
This is incomplete - we weren’t formally proving what is out there, but an audit of what we can work with easily is a good start.
So we’ve shown that computers are terrible and you have to ignore the reality to actually do anything.
Since we’ve been looking at that deployment, and this is a conference about delivering software, I’m going to show off some of the more interesting parts of our testing process.
Sometimes we change something in our environment that is not backwards compatible. Like, we upgrade a database or something. This check gives us the ability to force developers to a minimum version so that we don’t annoy them. We’ll tell everybody to rebase. Happens infrequently enough, but a good way to avoid doing a lot of work at times. Also we work off of trunk!!!
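A gate like this can be as small as comparing version tuples. Everything here (the constant name, the message, the threshold) is invented for illustration, not Intercom's actual check:

```python
# Hypothetical minimum tooling version after a breaking environment change.
MINIMUM_TOOLING_VERSION = (2, 4, 0)

def check_tooling_version(current: tuple) -> None:
    """Refuse to run against a checkout older than the environment supports."""
    if current < MINIMUM_TOOLING_VERSION:
        raise RuntimeError(
            "Checkout too old for the current environment; please rebase onto trunk."
        )
```

Bumping the constant is then the one-line way to tell everybody to rebase.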
This is from our Dockerfile. We don't use Docker in production, but it's useful for sharing test artifacts. We use a vendored copy of a Docker base image from CircleCI (who we don't use for Intercom) based off Ruby 2.5.5 (which we have long since migrated off of) to install a "stretch" Docker image (we don't use Debian in production, we use Amazon Linux, which is CentOS based). Why do we vendor this? Because upstream changes have broken our deployment pipeline, so we'd rather control this. That's a lesson learned!
Here we just install a vendored version of MongoDB Enterprise server (we don't use Enterprise server in production), for some reason at the end of a shell chain that installs a vendored version of OpenSSL. Something to do with the move from jessie. "It works."
Here's libeatmydata - an LD_PRELOAD shim that turns fsync() and friends into no-ops, trading durability for speed, which is a fine trade in a throwaway test environment.
A whirlwind tour through Docker.
Some Mongo config, some installing MySQL from scratch (we run AWS's RDS Aurora in production), and some Redis stable (we use ElastiCache Redis in production).
And some Ruby stuff!
Next, we load the schema from a cache… no I mean the schema cache from a… cache…
In our modern CI/CD environment, a "green" build has passed all its tests and is therefore safe to ship to production. A quick look under the covers shows that this is often far from the case - for example, non-deterministic tests being covered up by retries.
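The retry trick mentioned above is worth seeing in miniature. This is a naive sketch of the kind of wrapper CI systems use to paper over flaky tests - the method name and attempt count are illustrative, not from any particular test framework:

```ruby
# Re-run a block up to `attempts` times, swallowing failures until the
# last attempt. A test that fails non-deterministically will often still
# come out "green" under this wrapper - which is exactly the problem.
def with_retries(attempts: 3)
  tries = 0
  begin
    tries += 1
    yield
  rescue StandardError
    retry if tries < attempts
    raise
  end
end

# A test that fails twice and then passes looks identical, from the
# outside, to one that passed first time.
calls = 0
with_retries(attempts: 3) do
  calls += 1
  raise "flaky" if calls < 3
  :passed
end
```

So "green" really means "eventually green within the retry budget", which is a weaker claim than it appears.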
Also, when we deploy our monolith to over a thousand servers, it doesn't happen all at once. At any moment there might be many software versions running, even on the same host. But outside of a few narrow edge cases, this is totally fine.
My point here is that we've got a reliable, battle-hardened, complex build and deploy system. To ship code at Intercom, you're better off not knowing about any of this stuff! It's another example of having to act dumb to get your job done. The abstraction here is critical.
So we’ve shown that computers are terrible and you have to ignore the reality to actually do anything. Updating your knowledge of what’s going on is useful when you realise that what you’re working with doesn’t work the way you expect.
Once you realise there’s value in updating your model of the world, what can you do about it?
For example, my understanding of the main drivers of Intercom's AWS cloud costs, and how they are influenced by how we autoscale, isn't something that can be built up by writing a few debug statements. Analyse the data, ingest it into analytical tools, build hypotheses, and make decisions on the basis of what you now think you know.
There are some great materials on a bunch of the topics I've covered here that go into a lot more detail and are probably ten times more articulate.
Copy Construct’s blog post that I already mentioned, Tanya Reilly’s “Nobody could have predicted this”, Dan Luu’s talk from Deconstruct which goes into a lot of detail about how reliably saving data is very hard. I’ll tweet out links and the slide deck after the talk!
The backend looks like this. It looks similar enough to
So unlike the proverbial frog in boiling water, we did notice some problems.
There's my Twitter handle. I hope you enjoyed the talk. Thanks for listening!