The document describes the speaker's role as a data scientist at a social game company. It outlines their analytic architecture, which uses MongoDB to store and analyze social data and access logs from their mobile game. Hadoop is used to pre-process raw log data before it is loaded into MongoDB. MapReduce operations in MongoDB aggregate and analyze the data to calculate metrics such as daily/hourly pageviews and unique users. The results are stored in normalized collections to enable further analysis and visualization of billions of data records.
The flexibility of MongoDB makes it perfect for storing analytics. I'll discuss a few patterns for storing data that we have learned while growing Gaug.es from zero to millions of page views a day. You'll leave with a desire to measure everything and the ability to do it.
Learn how you can enjoy the developer productivity, low TCO, and unlimited scale of MongoDB as a tick database for capturing, analyzing, and taking advantage of opportunities in tick data. This presentation will illustrate how MongoDB can easily and quickly store variable data formats, like top and depth of book, multiple asset classes, and even news and social networking feeds. It will explore aggregating and analyzing tick data in real time for automated trading or in batch for research and analysis, and how auto-sharding enables MongoDB to scale with commodity hardware to satisfy unlimited storage and performance requirements.
MongoDB and Hadoop work powerfully together as complementary technologies. Learn how the Hadoop connector allows you to use the power of MapReduce to process data sourced from your MongoDB cluster.
Doing Joins in MongoDB: Best Practices for Using $lookup (MongoDB)
Speaker: Austin Zellner, Solutions Architect, MongoDB
Level: 200 (Intermediate)
Track: Data Analytics
$lookup is a pipeline stage in the aggregation framework that performs a left outer join. In this session, you will learn how to leverage $lookup in your applications and best practices for implementing features with $lookup.
What You Will Learn:
- Fundamentals of $lookup and its syntax.
- How to use $lookup stages in your aggregation pipelines.
- Best practices for using $lookup to implement application features.
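The left-outer-join behavior of $lookup can be sketched in plain Python. This is a minimal illustration of the semantics only, not the aggregation framework itself; the `orders`/`customers` collections and their field names are hypothetical. In a real pipeline the equivalent stage would be `{"$lookup": {"from": ..., "localField": ..., "foreignField": ..., "as": ...}}`.

```python
# A plain-Python sketch of $lookup's left-outer-join semantics.
# Collection contents and field names below are hypothetical examples.

def lookup(left_docs, right_docs, local_field, foreign_field, as_field):
    """For each left document, attach the list of all right documents
    whose foreign_field equals the left document's local_field."""
    joined = []
    for left in left_docs:
        matches = [r for r in right_docs
                   if r.get(foreign_field) == left.get(local_field)]
        # left outer join: unmatched left docs get an empty list
        joined.append({**left, as_field: matches})
    return joined

orders = [{"_id": 1, "customer_id": "a"}, {"_id": 2, "customer_id": "z"}]
customers = [{"_id": "a", "name": "Alice"}]

result = lookup(orders, customers, "customer_id", "_id", "customer")
```

Note that, like $lookup, this never drops a left-side document; a missing match simply yields an empty array.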
The integration between the Spring Framework and MongoDB tends to be somewhat unknown. This presentation surveys the projects that make up the Spring ecosystem (Spring Data, Spring Boot, Spring IO, etc.) and shows how to combine them, from pure Java projects to massive enterprise systems that require these pieces to interact.
To understand how to make your application fast, it's important to understand what makes the database fast. We will take a detailed look at how to think about performance, and how different choices in schema design affect your cluster's performance depending on the storage engine used and the physical resources available.
Dev Jumpstart: Build Your First App with MongoDB (MongoDB)
New to MongoDB? This talk will introduce the philosophy and features of MongoDB. We’ll discuss the benefits of the document-based data model that MongoDB offers by walking through how one can build a simple app. We’ll cover inserting, updating, and querying the database of books. This session will jumpstart your knowledge of MongoDB development, providing you with context for the rest of the day's content.
Dan Sullivan - Data Analytics and Text Mining with MongoDB - NoSQL matters Du... (NoSQLmatters)
Data analysis is an exploratory process that requires a variety of tools and a flexible data store. Data analysis projects are easy to start but quickly become difficult to manage and error-prone when depending on file-based data storage. Relational databases are poorly equipped to accommodate the dynamic demands of complex analysis. This talk describes best practices for using MongoDB for analytics projects. Examples will be drawn from a large-scale text mining project (approximately 25 million documents) that applies machine learning (neural networks and support vector machines) and statistical analysis. Tools discussed include R, Spark, the Python scientific stack, and custom pre-processing scripts, but the focus is on using these with the document database.
MongoDB and Hadoop: Driving Business Insights (MongoDB)
MongoDB and Hadoop can work together to solve big data problems facing today's enterprises. We will take an in-depth look at how the two technologies complement and enrich each other with complex analyses and greater intelligence. We will take a deep dive into the MongoDB Connector for Hadoop and how it can be applied to enable new business insights with MapReduce, Pig, and Hive, and demo a Spark application to drive product recommendations.
Basic Concepts. Webinar 4: Advanced indexing, text and g... indexes (MongoDB)
This is the fourth webinar in the Basic Concepts series, which introduces the MongoDB database. This webinar looks at support for free-text and geospatial indexes.
MongoDB San Francisco 2013: Storing eBay's Media Metadata on MongoDB present... (MongoDB)
This session will be a case study of eBay’s experience running MongoDB for project Zoom, in which eBay stores all media metadata for the site. This includes references to pictures of every item for sale on eBay. This cluster is eBay's first MongoDB installation on the platform and is a mission critical application. Yuri Finkelstein, an Enterprise Architect on the team, will provide a technical overview of the project and its underlying architecture.
eBay has developed a comprehensive database capacity planning process over 20 years of heavy Oracle usage. With the adoption of NoSQL technologies, we are working to apply the same process to NoSQL, especially Cassandra. eBay has annual traffic peaks during the Q4 holiday, so we need to proactively review capacity needs and adjust to sail through peak time without issues. With benchmarking tests for each available SKU, we are able to meet business needs without over-provisioning.
An Elastic Metadata Store for eBay’s Media Platform (MongoDB)
In order to build robust, multi-tenant, highly available storage services that meet the business' SLA, your databases have to be sharded. But if your service has to scale continuously through the incremental addition of storage without service interruption or human intervention, basic static sharding is not enough. At eBay, we are building MStore to solve this problem, with MongoDB as the storage engine. In this presentation, we will dive into the key design concepts of this solution.
Getting Started with Geospatial Data in MongoDB (MongoDB)
MongoDB supports geospatial data and specialized indexes that make building location-aware applications easy and scalable.
In this session, you will learn the fundamentals of working with geospatial data in MongoDB. We will explore how to store and index geospatial data and best practices for using geospatial query operators and methods. By the end of this session, you should be able to implement basic geolocation functionality in an application.
In this webinar, you will learn:
- Getting geospatial data into MongoDB and how to build geospatial indexes.
- The fundamentals of MongoDB's geospatial query operators and how to design queries that meet the needs of your application.
- Advanced geospatial capabilities with Java geospatial libraries and MongoDB.
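The shapes involved in the above can be shown without a live server: a GeoJSON point as stored in a document, the 2dsphere index specification, and a $near query. This is a sketch only; the collection content ("Tokyo Tower") and coordinates are hypothetical, and with a real deployment you would create the index and run the query through a driver such as pymongo.

```python
# A sketch of the document, index, and query shapes for MongoDB geospatial
# queries. Names and coordinates are hypothetical examples.

# GeoJSON Point stored in a document (note [longitude, latitude] order)
place = {
    "name": "Tokyo Tower",
    "location": {"type": "Point", "coordinates": [139.7454, 35.6586]},
}

# Index specification for a 2dsphere index on the location field,
# e.g. collection.create_index([("location", "2dsphere")]) in pymongo
index_spec = [("location", "2dsphere")]

# $near query document: places within 1 km of a point, nearest first
near_query = {
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [139.75, 35.66]},
            "$maxDistance": 1000,  # meters
        }
    }
}
```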
1. Social Data and Log Analysis
Using MongoDB
2011/03/01(Tue) #mongotokyo
doryokujin
2. Self-Introduction
• doryokujin (Takahiro Inoue), Age: 25
• Education: Keio University
• Master of Mathematics March 2011 ( Maybe... )
• Major: Randomized Algorithms and Probabilistic Analysis
• Company: Geisha Tokyo Entertainment (GTE)
• Data Mining Engineer (only me, part-time)
• Organized Community:
• MongoDB JP, Tokyo Web Mining
3. My Job
• I’m a Fledgling Data Scientist
• Development of analytical systems for social data
• Development of recommendation systems for social data
• My Interest: Big Data Analysis
• How to generate logs scattered across many servers
• How to store and access data
• How to analyze and visualize billions of records
4. Agenda
• My Company’s Analytic Architecture
• How to Handle Access Logs
• How to Handle User Trace Logs
• How to Collaborate with Front Analytic Tools
• My Future Analytic Architecture
5. Agenda
• My Company’s Analytic Architecture (Hadoop, Mongo Map Reduce)
• How to Handle Access Logs (Hadoop, Schema Free)
• How to Handle User Trace Logs
• How to Collaborate with Front Analytic Tools (REST Interface, JSON)
• My Future Analytic Architecture (Capped Collection, Modifier Operation)
Of Course Everything With MongoDB
7. Social Game (Mobile): Omiseyasan
• Enjoy arranging their own shop (and avatar)
• Communicate with other users by shopping, part-time, ...
• Buy seeds of items to display in their own shops
13. How to Handle Access Logs
[Diagram] Raw logs go through pretreatment (trimming, validation, filtering, ...), are backed up to S3, and are loaded into MongoDB as a data server.
14. Access Data Flow
(Caution: needs MongoDB >= 1.7.4)
[Diagram] Pretreatment → user_access; the 1st Map Reduce (group by) produces user_pageview and agent_pageview; the 2nd Map Reduce produces daily_pageview and hourly_pageview.
15. Hadoop
• Using Hadoop: Pretreatment Raw Records
• [Map / Reduce]
• Read all records
• Split each record by ‘\s’ (whitespace)
• Filter unnecessary records (such as *.swf)
• Check whether each record is valid
• Insert (save) records to MongoDB
※ write operations won’t yet fully utilize all cores
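The pretreatment steps above can be sketched in plain Python. The log layout used here (userId, date, url, status) is an assumption for illustration; the real Hadoop job works on the raw access-log format and inserts the cleaned records into MongoDB.

```python
# A hedged sketch of the pretreatment step: split raw access-log lines on
# whitespace, filter unnecessary records (e.g. *.swf), and validate.
# The 4-field log layout below is a hypothetical example.

def pretreat(raw_lines):
    cleaned = []
    for line in raw_lines:
        fields = line.split()            # split each record on whitespace
        if len(fields) != 4:             # validation: drop malformed records
            continue
        user_id, date, url, status = fields
        if url.endswith(".swf"):         # filter unnecessary records
            continue
        cleaned.append({"userId": user_id, "date": date,
                        "url": url, "status": int(status)})
    return cleaned  # in the real pipeline these docs are inserted into MongoDB

logs = [
    "u1 2011-02-12 /shop/index 200",
    "u2 2011-02-12 /banner.swf 200",
    "broken line",
]
docs = pretreat(logs)
```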
18. 1st Map Reduce
• [Aggregation]
• Group by url, date, userId
• Group by url, date, userAgent
• Group by url, date, time
• Group by url, date, statusCode
• Map Reduce operations run in parallel on all shards
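What these grouping jobs compute can be sketched in plain Python with a counter keyed by the grouping fields. The sample documents are hypothetical; in the deck the grouping is done by MongoDB's map/reduce over the user_access collection.

```python
# A plain-Python sketch of the "group by url, date, userId" aggregation:
# count pageviews per grouping key. Sample docs are hypothetical.
from collections import Counter

def group_count(docs, keys):
    counts = Counter()
    for d in docs:
        counts[tuple(d[k] for k in keys)] += 1  # one hit per access record
    return counts

access = [
    {"url": "/shop", "date": "2011-02-12", "userId": "u1"},
    {"url": "/shop", "date": "2011-02-12", "userId": "u1"},
    {"url": "/shop", "date": "2011-02-12", "userId": "u2"},
]

by_user = group_count(access, ["url", "date", "userId"])
```

The same helper covers the other groupings on the slide (userAgent, time, statusCode) by changing the key list.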
20. # ( mongodb >= 1.7.4 )
result = db.user_access.map_reduce(map,
reduce,
merge_output="user_pageview",
full_response=True,
query={"date": date})
• About the output collection, there are 4 options (MongoDB >= 1.7.4):
• out : overwrite the collection if it already exists
• merge_output : merge the new data into the old output collection
• reduce_output : a reduce operation will be performed on the two values (the same key in the new result and the old collection) and the result will be written to the output collection
• full_response (=False) : if True, return the full stats of the operation instead of just the result collection. (The separate "inline" option, where no collection is created and the whole map-reduce happens in RAM with the result set limited to 8MB, 16MB/doc in 1.8?, is described on the next slide.)
21. Map Reduce (>=1.7.4):
out option in JavaScript
• "collectionName" : If you pass a string indicating the name of a collection, then
the output will replace any existing output collection with the same name.
• { merge : "collectionName" } : This option will merge new data into the old
output collection. In other words, if the same key exists in both the result set and
the old collection, the new key will overwrite the old one.
• { reduce : "collectionName" } : If documents exist for a given key in the result
set and in the old collection, then a reduce operation (using the specified reduce
function) will be performed on the two values and the result will be written to
the output collection. If a finalize function was provided, this will be run after
the reduce as well.
• { inline : 1} : With this option, no collection will be created, and the whole map-
reduce operation will happen in RAM. Also, the results of the map-reduce will
be returned within the result object. Note that this option is possible only when
the result set fits within the 8MB limit.
http://www.mongodb.org/display/DOCS/MapReduce
27. Current Map Reduce is Imperfect
• [Single Threads per node]
• Doesn't scale map-reduce across multiple threads
• [Overwrite the Output Collection]
• Overwrites the old collection ( no other options like “merge” or
“reduce” )
# mapreduce code to merge output (MongoDB < 1.7.4)
result = db.user_access.map_reduce(map,
reduce,
full_response=True,
out="temp_collection",
query={"date": date})
[db.user_pageview.save(doc) for doc in db.temp_collection.find()]
28. Useful Reference: Map Reduce
• http://www.mongodb.org/display/DOCS/MapReduce
• A Look At MongoDB 1.8's MapReduce Changes
• Map Reduce and Getting Under the Hood with Commands
• Map/reduce runs in parallel/distributed?
• Map/Reduce parallelism with Master/Slave
• mapReduce locks the whole server
• mapreduce vs find
33. Hadoop
• Using Hadoop: Pretreatment Raw Records
• [Map / Reduce]
• Split each record by ‘\s’ (whitespace)
• Filter unnecessary records
• Check whether a user behaves dishonestly
• Unify the format so records can be summed up ( because raw records are
written in free format )
• Sum up records group by “userId” and “actionType”
• Insert (save) records to MongoDB
※ write operations won’t yet fully utilize all cores
34. An Example of User Trace Log
UserId ActionType ActionDetail
35. An Example of User Trace Log
-----Change------
ActionLogger a{ChangeP} (Point,1371,1383)
ActionLogger a{ChangeP} (Point,2373,2423)
------Get------
ActionLogger a{GetMaterial} (syouhinnomoto,0,-1)
ActionLogger a{GetMaterial} usesyouhinnomoto
ActionLogger a{GetMaterial} (omotyanomotoPRO,1,6)
※ The value of “actionDetail” must be in a unified format
-----Trade-----
ActionLogger a{Trade} buy 3 itigoke-kis from gree.jp:00000 #
-----Make-----
ActionLogger a{Make} make item kuronekono_n
ActionLogger a{MakeSelect} make item syouhinnomoto
ActionLogger a{MakeSelect} (syouhinnomoto,0,1)
-----PutOn/Off-----
ActionLogger a{PutOff} put off 1 ksuteras
ActionLogger a{PutOn} put 1 burokkus @2500
-----Clear/Clean-----
ActionLogger a{ClearLuckyStar} Clear LuckyItem_1 4 times
-----Gatcha-----
ActionLogger a{Gacha} Play gacha with first free play:
ActionLogger a{Gacha} Play gacha:
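A parser for these free-format lines can be sketched with a regular expression that pulls out the actionType and leaves the actionDetail for per-action handling. This is a sketch only; the real pretreatment runs in Hadoop and must unify the detail formats before summing.

```python
# A hedged sketch of parsing ActionLogger lines into (actionType, actionDetail)
# pairs, as a first step toward summing per userId and actionType.
import re

LINE_RE = re.compile(r"ActionLogger a\{(?P<action>\w+)\}\s*(?P<detail>.*)")

def parse_action(line):
    """Return (actionType, actionDetail) or None for non-matching lines."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return m.group("action"), m.group("detail").strip()

lines = [
    "ActionLogger a{ChangeP} (Point,1371,1383)",
    "ActionLogger a{Trade} buy 3 itigoke-kis from gree.jp:00000 #",
]
parsed = [parse_action(l) for l in lines]
```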
42. Categorize Users
[Diagram] Attribution data from user_trace, user_registration, user_charge, user_savedata, and user_pageview flows into user_category.
• [Categorize Users]
• by play term
• by total amount of charge
• by registration date
• [Take a Snapshot of Each Category’s Stats per Week]
44. Collection: user_category
> var cross = new Cross() # User-Defined Function
> MCResign = cross.calc("2011-02-12", "MC", 1)
# each value is the number of users
# Charge(yen)/Term(day)
Charge(yen)/Term(day)   0(z)     ~¥1k(s)   ~¥10k(m)   ¥100k~(l)   total
~1day(z)                50000    10        5          0           50015
~1week(s)               50000    100       50         3           50153
~1month(m)              100000   200       100        1           100301
~3month(l)              100000   300       50         6           100356
month~(ll)              0        0         0          0           0
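The cross tabulation in this slide can be sketched in plain Python: bucket each user by play term and by total charge, then count users per cell. The bucket edges and sample users below are hypothetical, not the ones used by the real Cross() function.

```python
# A plain-Python sketch of a charge x play-term cross tabulation.
# Bucket edges and user records are hypothetical examples.
from collections import defaultdict

def charge_bucket(yen):
    if yen == 0: return "z"
    if yen <= 1_000: return "s"
    if yen <= 10_000: return "m"
    return "l"

def term_bucket(days):
    if days <= 1: return "z"
    if days <= 7: return "s"
    if days <= 30: return "m"
    if days <= 90: return "l"
    return "ll"

def cross_tab(users):
    table = defaultdict(int)  # (term, charge) -> number of users
    for u in users:
        key = (term_bucket(u["term_days"]), charge_bucket(u["charge_yen"]))
        table[key] += 1
    return table

users = [
    {"term_days": 5, "charge_yen": 0},
    {"term_days": 5, "charge_yen": 500},
    {"term_days": 40, "charge_yen": 20_000},
]
table = cross_tab(users)
```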
48. Data Table: jQuery.DataTables
[ Data Table ]
1. Variable length pagination
2. On-the-fly filtering
3. Multi-column sorting with data type detection
4. Smart handling of column widths
5. Scrolling options for table viewport
6. ...
• Want to Share Daily Summary
• Want to See Data from Many Viewpoints
• Want to Implement Easily
49. Graph: jQuery.HighCharts
[ Graph ]
1. Numerous Chart Types
2. Simple Configuration Syntax
3. Multiple Axes
4. Tooltip Labels
5. Zooming
6. ...
• Want to Visualize Data
• Handle Time Series Data Mainly
• Want to Implement Easily
50. sleepy.mongoose
• [REST Interface + Mongo]
• Get Data by HTTP GET/POST Request
• sleepy.mongoose
‣ request as “/db_name/collection_name/_command”
‣ made by a 10gen engineer: @kchodorow
‣ Sleepy.Mongoose: A MongoDB REST Interface
51. sleepy.mongoose
//start server
> python httpd.py
…listening for connections on http://localhost:27080
//connect to MongoDB
> curl --data server=localhost:27017 'http://localhost:27080/_connect'
//request example
> http://localhost:27080/playshop/daily_charge/_find?criteria={}&limit=10&batch_size=10
{"ok": 1, "results": [{"_id": "…", "date": …}, {"_id": …}], "id": 0}
52. JSON: Mongo <---> Ajax
[Diagram] The browser (jQuery/Ajax) gets JSON from MongoDB through sleepy.mongoose (REST Interface).
• jQuery library and MongoDB are compatible
• It is not necessary to describe HTML tags (such as <table>)
70. Summary
• Almighty as an Analytic Data Server
• schema-free: social game data are changeable
• rich queries: important for analyzing from many points of view
• powerful aggregation: map reduce
• mongo shell: analyzing from the mongo shell is speedy and handy
• More...
• Scalability: setting up Replication and Sharding is very easy
• Node.js: enables server-side scripting with Mongo
72. I ♥ MongoDB JP
• continue to be an organizer of MongoDB JP
• continue to propose many use cases of MongoDB
• ex: Social Data, Log Data, Medical Data, ...
• support MongoDB users
• by document translation, user-group, IRC, blog, book,
twitter,...
• boosting services and products using MongoDB
73. Thank you for coming to
Mongo Tokyo!!
[Contact me]
twitter: doryokujin
skype: doryokujin
mail: mr.stoicman@gmail.com
blog: http://d.hatena.ne.jp/doryokujin/
MongoDB JP: https://groups.google.com/group/mongodb-jp?hl=ja