This document summarizes Jeremy Zawodny's work with MySQL and search at Craigslist. It discusses how Craigslist uses MySQL for its classified listings but encountered scaling issues as traffic grew. To address this, Craigslist implemented the Sphinx search engine, which improved performance and allowed them to reduce their MySQL cluster size. The document also outlines Craigslist's data archiving strategy using eventual consistency and their goals for further optimizing their database and search infrastructure.
MySQL and Search at Craigslist
1. MySQL and Search at Craigslist
Jeremy Zawodny
jzawodn@craigslist.org
http://craigslist.org/
Jeremy@Zawodny.com
http://jeremy.zawodny.com/blog/
2. Who Am I?
● Creator and co-author of High Performance MySQL
● Creator of mytop
● Perl Hacker
● MySQL Geek
● Craigslist Engineer (as of July, 2008)
  – MySQL, Data, Search, Perl
● Ex-Yahoo (Perl, MySQL, Search, Web Services)
4. What is Craigslist?
● Local Classifieds
  – Jobs, Housing, Autos, Goods, Services
● ~500 cities world-wide
● Free
  – Except for jobs in ~18 cities and brokered apartments in NYC
● Over 20B pageviews/month
● 50M monthly users
● 50+ countries, multiple languages
● 40+M ads/month, 10+M images
6. Technical and other Challenges
● High ad churn rate
  – Post half-life can be short
● Growth
● High traffic volume
● Back-end tools and data analysis needs
● Growth
● Need to archive postings... forever!
  – 100s of millions, searchable
● Internationalization and UTF-8
7. Technical and other Challenges
● Small Team
  – Fires take priority
  – Infrastructure gets creaky
  – Organic code and schema growth over years
● Growth
● Lack of abstractions
  – Too much embedded SQL in code
● Documentation vs. Institutional Knowledge
  – “Why do we have things configured like this?”
8. Goals
● Use Open Source
● Keep infrastructure small and simple
  – Lower power is good!
  – Efficiency all around
  – Do more with less
● Keep site easy and approachable
  – Don't overload with features
  – People are easily confused
9. Craigslist Internals Overview
[Architecture diagram]
● Load Balancer
● Read Proxy Array / Write Proxy Array (Perl + memcached)
● Web Read Array (Apache 1.3 + mod_perl)
● Object Cache (Perl + memcached)
● Search Cluster (Sphinx)
● Read DB Cluster (MySQL 5.0.xx)
Not Included: user db, image db, async tasks, email, accounting, internal tools, and more!
11. Vertical Partitioning
● Different roles have different access patterns
  – Sub-roles based on query type
● Easier to manage and scale
● Logical, self-contained data
● Servers may not need to be as big/fast/expensive
● Difficult to do retroactively
● Various named db “handles” in code
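The “named handles” idea above can be sketched as a small registry that maps a role name to connection settings, so application code asks for a role rather than hard-coding one big database. This is an illustrative sketch with hypothetical role and host names, not Craigslist's actual Perl code:

```python
# Each vertical partition (role) gets its own connection settings.
# Role and host names below are made up for illustration.
DB_ROLES = {
    "classifieds_read":  {"host": "db-read.example",  "db": "cl"},
    "classifieds_write": {"host": "db-write.example", "db": "cl"},
    "users":             {"host": "db-users.example", "db": "users"},
}

def handle_for(role):
    """Look up connection settings for a named db role."""
    try:
        return DB_ROLES[role]
    except KeyError:
        raise ValueError("unknown db role: %s" % role)
```

The payoff is that moving a role to different hardware becomes a one-line config change instead of a code hunt.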
13. Horizontal Partitioning: Hydra
● Need to retrofit a lot of code
● Need non-blocking Perl MySQL client
● Wrapped http://code.google.com/p/perl-mysql-async/
● Eventually can size DB boxes based on price/power and adjust mapping function(s)
  – Choose hardware first
  – Make the db “fit”
● Archiving lets us age a cluster instead of migrating its data to a new one.
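The adjustable mapping function mentioned above can be sketched as a table of id ranges, re-weighted as boxes of different price/power are added — a minimal illustration, assuming a simple range-based scheme (Hydra's real mapping is not described in the deck):

```python
# Hypothetical shard map: each entry is (low_id, high_id, shard_name).
# A bigger/faster box can simply be given a wider id range.
SHARD_RANGES = [
    (0,         5_000_000,  "db1"),   # smaller, older box
    (5_000_000, 20_000_000, "db2"),   # bigger box, wider range
]

def shard_for(posting_id):
    """Map a numeric posting id to the shard that owns it."""
    for lo, hi, shard in SHARD_RANGES:
        if lo <= posting_id < hi:
            return shard
    raise ValueError("no shard for id %d" % posting_id)
```

Because the table is data, not code, rebalancing onto new hardware only means editing the ranges.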
14. Search Evolution
● Problem: Users want to find stuff.
● Solution: Use MySQL Full Text.
● ...time passes...
● Problem: MySQL Full Text Doesn't Scale!
● Solution: Use Sphinx.
● ...time passes...
● Problem: Sphinx doesn't scale!
● Solution: Patch Sphinx.
15. MySQL Full-Text Problems
● Hitting invisible limits
  – CPU not pegged, memory available
  – Disk I/O not unreasonable
  – Locking / mutex contention? Probably.
● MyISAM has occasional crashing / corruption
● 5 clusters of 5 machines
  – Partitioning based on city and category
  – All “hand balanced” and high-maintenance
● ~30M queries/day
  – Close to limits
16. Sphinx: My First CL Project
● Sphinx is designed for text search
● Fast and lean C++ code
● Forking model scales well on multi-core
● Control over indexing, weighting, etc.
● Also spent some time looking at Apache Solr
17. Search Implementation Details
● Partitioning based on cities (each has a numeric id)
● Attributes vs. Keywords
● Persistent Connections
  – Custom client and server modifications
● Minimal stopword list
● Partition into 2 clusters (1 master, 4 slaves)
18. Sphinx Incremental Indexing
● Re-index every N minutes
● Use main + delta strategy
  – Adopted as: index + today + delta
  – One set per city (~500 * 3)
● Slaves handle live queries, update via rsync
● Need lots of FDs
● Use all 4 cores to index
● Every night, perform “daily merge”
● Generate config files via Perl
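With ~500 cities and three indexes each (index + today + delta), generating the Sphinx config mechanically is the only sane option. The deck says this was done in Perl; the sketch below shows the same idea in Python with a hypothetical, simplified section template (real Sphinx index sections carry many more directives):

```python
# Simplified stand-in for a Sphinx index section; paths and source
# names are illustrative, not Craigslist's real configuration.
TEMPLATE = """index {name}_{city_id}
{{
    source = src_{name}_{city_id}
    path   = /var/sphinx/{name}_{city_id}
}}
"""

def config_for_city(city_id):
    """Emit the three index sections (main, today, delta) for one city."""
    return "".join(TEMPLATE.format(name=n, city_id=city_id)
                   for n in ("index", "today", "delta"))

def full_config(city_ids):
    """Emit sections for every city (~500 * 3 sections in practice)."""
    return "".join(config_for_city(c) for c in city_ids)
```

A nightly “daily merge” then folds each city's `today` index back into its main index, keeping `delta` small.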
20. Sphinx Issues
● Merge bugs [fixed]
● File descriptor corruption [fixed]
● Persistent connections [fixed]
  – Overhead of fork() was substantial in our testing
  – 200 queries/sec vs. 1,000 queries/sec per box
● Missing attribute updates [unreported]
● Bogus docids in responses
● We need to upgrade to latest Sphinx soon
● Andrew and team have been excellent!
21. Search Project Results
● From 25 MySQL boxes to 10 Sphinx boxes
● Lots more headroom!
● New features
  – Nearby Search
● No seizing or locking issues
● 1,000+ qps during peak w/room to grow
● 50M queries per day w/steady growth
● Cluster partitioning built but not needed (yet?)
● Better separation of code
22. Sphinx Wishlist
● Efficient delete handling (kill lists)
● Non-fatal “missing” indexes
● Index dump tool
● Live document add/change/delete
● Built-in replication
● Stats and counters
● Text attributes
● Protocol checksum
23. Data Archiving, Replication, Indexes
● Problem: We want to keep everything.
● Solution: Archive to an archive cluster.
● Problem: Archiving is too painful. Index updates are expensive! Slaves affected.
● Solution: Archive with home-grown eventually consistent replication.
24. Data Archiving: OOB Replication
● Eventual Consistency
● Master process
  – SET SQL_LOG_BIN=0
  – Select expired IDs
  – Export records from live master
  – Import records into archive master
  – Delete expired from live master
  – Add IDs to list
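The master-side steps above can be sketched as pure logic over in-memory stand-ins for the live and archive databases. In the real system these are SQL statements run with `SET SQL_LOG_BIN=0`, which keeps them out of the binlog so normal replication never sees them — that is what makes the scheme “out of band”. Table shapes here are hypothetical:

```python
# One master-side archiving pass. `live` and `archive` stand in for
# the live and archive masters; `expired_log` is the shared id list
# that slave processes later consume.
def archive_pass(live, archive, expired_log, now):
    """Move expired postings from live to archive; log their ids."""
    # Select expired IDs.
    expired = [pid for pid, post in live.items() if post["expires"] < now]
    for pid in expired:
        # Export from live master, import into archive master,
        # then delete from live master.
        archive[pid] = live.pop(pid)
        # Add IDs to list so each slave can apply the same deletes.
        expired_log.append(pid)
    return expired
```

Because the deletes bypass the binlog, slaves stay temporarily out of sync by design and catch up via the id list.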
25. Data Archiving: OOB Replication
● Slave process
  – One per MySQL slave
  – Throttled to minimize impact
  – State kept on slave
● Clone friendly
  – Simple logic:
    ● Select expired IDs added since my sequence number
    ● Delete expired records
    ● Update local “last seen” sequence number
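The slave-side “simple logic” can be sketched the same way: each slave keeps its own “last seen” sequence number and deletes whatever expired ids the master logged after that point. A freshly cloned slave just inherits the cloned sequence number and catches up on its own — hence “clone friendly”. Again a pure-logic sketch with hypothetical structures:

```python
# One catch-up pass for a single slave. `local_db` stands in for the
# slave's copy of the data; `expired_log` is the master-maintained id
# list; `last_seen` is this slave's private position in that list.
def slave_catch_up(local_db, expired_log, last_seen):
    """Apply deletes for ids logged after last_seen; return new position."""
    for pid in expired_log[last_seen:]:
        local_db.pop(pid, None)   # delete if present; idempotent
    return len(expired_log)       # new "last seen" sequence number
```

Throttling in the real system just means pacing this loop; since deletes are idempotent, re-running a pass is always safe.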
26. Long Term Data Archiving
● Schema coupling is bad
  – ALTER TABLE takes forever
  – Lots of NULLs flying around
● CouchDB or similar long-term?
  – Schema-free feels like a good fit
● Tested some home grown solutions already
● Separate storage and indexing?
  – Indexing with Sphinx?
27. Drizzle, XtraDB, Future Stuff
● CouchDB looks very interesting. Maybe for archive?
● XtraDB / InnoDB plugin
  – Better concurrency
  – Better tuning of InnoDB internals
● libdrizzle + Perl
  – DBI/DBD may not fit an async model well
  – Can talk to both MySQL and Drizzle!
● Oracle buying Sun?!?!
28. We're Hiring!
● Work in San Francisco
● Flexible, Small Company
● Excellent Benefits
● Help Millions of People Every Week
● We Need Perl/MySQL Hackers
● Come Help us Scale and Grow