Introduces important facts and tools to help you get started with performance improvement.
Learn to monitor and analyze important metrics, then you can start digging in and improving.
Includes useful munin probes, predefined SQL queries to investigate your database's performance, and the top 5 most common performance problems in custom apps.
By Olivier Dony - Lead Developer & Community Manager, OpenERP
Odoo Online platform: architecture and challenges - Odoo
A short introduction to the technical architecture of the Odoo Online platform, including the advanced integrated features (instant DNS, email gateways, etc.), and the technical aspect of the SLA.
By Olivier Dony - Lead Developer & Community Manager, OpenERP
This session will be particularly interesting to beginning developers, taking their first steps into the wide world of software development.
Normally you're always taught how to write code and how to keep it clean. But when business grows, performance issues arise and it's not always obvious how to solve them. This talk will teach you how to profile your code, how to detect issues, how to read the results, and how to somewhat automate those tests.
The purpose of this paper is to demonstrate that an Odoo deployment can cost less than $100/month for 50 concurrent users, while remaining highly available, fault-tolerant, and scalable, thanks to its cloud architecture.
Web services are a set of tools available over internet or intranet networks that use standardized messaging to transfer data between applications or systems.
Web services allow interaction between different systems or applications using standard technologies such as HTTP, XML, WSDL, and SOAP.
In Odoo 13, partner creation is 7x faster than in Odoo 12, modifying a sales order is 11x faster, and creating and posting an invoice is 3x faster.
In this blog, we are going to discuss the comparison between Odoo 12 and Odoo 13.
OpenERP benchmark: test OpenERP's performance and robustness on your data volumes.
The OpenERP technical benchmark provides automated scripts to load-test OpenERP against your own data and activity.
This presentation was prepared for a Webcast where John Yerhot, Engine Yard US Support Lead, and Chris Kelly, Technical Evangelist at New Relic discussed how you can scale and improve the performance of your Ruby web apps. They shared detailed guidance on issues like:
Caching strategies
Slow database queries
Background processing
Profiling Ruby applications
Picking the right Ruby web server
Sharding data
Attendees will learn how to:
Gain visibility on site performance
Improve scalability and uptime
Find and fix key bottlenecks
See the on-demand replay:
http://pages.engineyard.com/6TipsforImprovingRubyApplicationPerformance.html
Docker Logging and analysing with Elastic Stack - Jakub Hajek (PROIDEA)
Collecting logs from an entirely stateless environment is one of the challenging parts of the application lifecycle. Correlating business logs with operating system metrics to provide insights is crucial for the entire organization. What aspects should be considered when you design your logging solution?
These slides show how to reduce latency and bandwidth on websites for an improved user experience.
Covering networking, compression, caching, ETags, application optimisation, Sphinx search, memcache, and database optimisation.
Abstract
At Stitch Fix most application logs are output in a structured JSON format for simpler debugging and downstream consumption. For example, data scientists can add a field to their application log and it will automatically turn up as a parsed field in Elasticsearch for easy dashboarding and querying via Kibana, or be easily found and queried in Presto. In this talk we’ll cover in more detail why structured logs are useful and provide leverage, caveats to using them, and how simple it is to get one going with Python.
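The Python approach mentioned in the abstract can be sketched with the standard logging module. This JSONFormatter is an illustrative minimal version, not Stitch Fix's actual implementation:

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Extra fields passed via `extra={"fields": {...}}` become
        # top-level JSON keys, ready for parsing in Elasticsearch/Presto.
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Each call emits a single parseable JSON line.
logger.info("order shipped", extra={"fields": {"order_id": 42}})
```

Because every line is valid JSON, downstream consumers can add or read fields without changing any parsing code.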
Talk for PerconaLive 2016 by Brendan Gregg. Video: https://www.youtube.com/watch?v=CbmEDXq7es0 . "Systems performance provides a different perspective for analysis and tuning, and can help you find performance wins for your databases, applications, and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes six important areas of Linux systems performance in 50 minutes: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events), static tracing (tracepoints), and dynamic tracing (kprobes, uprobes), and much advice about what is and isn't important to learn. This talk is aimed at everyone: DBAs, developers, operations, etc, and in any environment running Linux, bare-metal or the cloud."
Talk given at the London AICamp meetup on 13 July 2023. It's an introduction to building open-source ChatGPT-like chat bots and some of the considerations to keep in mind while training/tuning them using Airflow.
OSMC 2018 | Learnings, patterns and Uber's metrics platform M3, open sourced ... - NETWAYS
At Uber we use high cardinality monitoring to observe and detect issues with our 4,000 microservices running on Mesos and across our infrastructure systems and servers. We’ll cover how we put the resulting 6 billion plus time series to work in a variety of different ways, auto-discovering services and their usage of other systems at Uber, setting up and tearing down alerts automatically for services, sending smart alert notifications that rollup different failures into individual high level contextual alerts, and more. We’ll also talk about how we accomplish all this with a global view of our systems with M3, our open source metrics platform. We’ll take a deep dive look at how we use M3DB, now available as an open source Prometheus long term storage backend, to horizontally scale our metrics platform in a cost efficient manner with a system that’s still sane to operate with petabytes of metrics data.
PGConf APAC 2018 - Monitoring PostgreSQL at Scale - PGConf APAC
Speaker: Lukas Fittl
Your PostgreSQL database is one of the most important pieces of your architecture, yet the level of introspection available in Postgres is often hard to work with. It's easy to get very detailed information, but what should you really watch out for, report on, and alert on?
In this talk we'll discuss how query performance statistics can be made accessible to application developers, critical entries one should monitor in the PostgreSQL log files, how to collect EXPLAIN plans at scale, how to watch over autovacuum and VACUUM operations, and how to flag issues based on schema statistics.
We'll also talk a bit about monitoring multi-server setups, first going into high availability and read standbys, logical replication, and then reviewing what monitoring looks like for sharded databases like Citus.
The talk will primarily describe free/open-source tools and statistics views readily available from within Postgres.
pg_proctab: Accessing System Stats in PostgreSQL - Mark Wong
pg_proctab is a collection of PostgreSQL stored functions that provide access to the operating system process table using SQL. We'll show you which functions are available and where they collect the data, and give examples of their use to collect processor and I/O statistics on SQL queries.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. - Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc., and while I didn't get rich from it, my extensions had 63K downloads (powering possibly tens of thousands of websites).
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security, Spring Transaction, Spring MVC, Log4j, REST/SOAP web services.
Accelerate Enterprise Software Engineering with Platformless - WSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Prosigns: Transforming Business with Tailored Technology Solutions - Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Cyaniclab: Software Development Agency Portfolio - Cyanic lab
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
First Steps with Globus Compute Multi-User Endpoints - Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Top Features to Include in Your Winzo Clone App for Business Growth - rickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Enhancing Project Management Efficiency: Leveraging AI Tools like ChatGPT - Jay Das
With the advent of artificial intelligence (AI) tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
Large Language Models and the End of Programming - Matt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis - Globus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
How Recreation Management Software Can Streamline Your Operations - wottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Earth System Grid Federation - Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient Monitoring - Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback, and a lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
2. Odoo can handle large data and transaction volumes out of the box!
3. On Odoo Online, a typical server hosts more than 3000 instances, with 100-200 new ones per day.
4. Typical size of large deployments
o Multi-GB database (10-20GB)
o Multi-million records tables
  ● Stock moves
  ● Journal items
  ● Mails / Leads
o On a single Odoo server!
8. PostgreSQL
o Is the real workhorse of your Odoo server
o Powers large cloud services
o Can handle terabytes of data efficiently
o Should be fine-tuned to use your hardware
o Cannot magically fix algorithmic/complexity issues in [y]our code!
9. Hardware Sizing
o 2014 recommendation for a single server for up to ~100 active users:
o Intel Xeon E5 2.5GHz 6c/12t (e.g. E5-1650v2)
o 32GB RAM
o SATA/SAS RAID-1
o On Odoo Online, this spec handles 3000 dbs with a load average ≤ 3
10. Transaction Sizing
o Typical read transaction takes ~100ms
o A single process can handle ~6 t/s
o 8 worker processes = ~50 t/s
o 1 interactive user = ~50 t/m peak = ~1 t/s
o Peak use with 100 users = 100 t/s
o On average, 5-10% of peak = 5-10 t/s
11. SQL numbers
o Most complex SQL queries should be under 100ms, and the simplest ones < 5ms
o RPC read transactions: < 40 queries
o RPC write transactions: 200+ queries
o One DB transaction = 100-300 heavy locks
12. Sizing
For anything else, appropriate load testing is a must before going live!
Then size accordingly...
17. PostgreSQL Deployment
o Use PostgreSQL 9.2/9.3 for performance
o Tune it: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
o Avoid deploying PostgreSQL on a VM
o If you must, optimize the VM for IOPS
o Check out vFabric vPostgres 9.2
o Use separate disks for SYSTEM/DATA/WAL
o shared_buffers: more than 55% VM RAM
o Enable guest memory ballooning driver
20. Monitor & Measure
o Get the pulse of your deployments
o System load
o Disk I/O
o Transactions per second
o Database size
o Recommended tool: munin
o --log-level=debug_rpc in Production!
2014-05-03 12:22:32,846 9663 DEBUG test openerp.netsvc.rpc.request: object.execute_kw time:0.031s mem: 763716k -> 763716k (diff: 0k) ('test',1,'*','sale.order','read',(...),{...})
21. Monitor & Measure
o Build your munin dashboard
o Establish what the “usual level of performance” is
o Add your own specific metrics
o It will be invaluable later, even if you don't know yet
22. Monitor & Measure
#!/bin/sh
#%# family=manual
#%# capabilities=autoconf suggest
case $1 in
    autoconf)
        exit 0
        ;;
    suggest)
        exit 0
        ;;
    config)
        echo graph_category openerp
        echo graph_title openerp rpc request count
        echo graph_vlabel num requests/minute in last 5 minutes
        echo requests.label num requests
        exit 0
        ;;
esac
# watch out for the time zone of the logs => using date -u for UTC timestamps
result=$(tail -60000 /var/log/odoo.log | grep "object.execute_kw time" | \
    awk "BEGIN{count=0} (\$1 \" \" \$2) >= \"$(date +'%F %H:%M:%S' -ud '5 min ago')\" {count+=1} END{print count/5}")
echo "requests.value ${result}"
exit 0
Munin plugin for transactions/minute
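For readers more comfortable in Python than awk, the same five-minute-window counting logic can be sketched as follows. The log format matches the debug_rpc sample shown above; the function itself is illustrative and not part of the munin plugin:

```python
from datetime import datetime, timedelta

def requests_per_minute(lines, now, window_min=5,
                        marker="object.execute_kw time"):
    """Count matching RPC log lines in the last `window_min` minutes,
    and return the rate per minute (same result as the awk one-liner)."""
    cutoff = now - timedelta(minutes=window_min)
    count = 0
    for line in lines:
        if marker not in line:
            continue
        # Log lines start with "YYYY-MM-DD HH:MM:SS,mmm ..."; the first
        # 19 characters are the second-resolution timestamp.
        stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        if stamp >= cutoff:
            count += 1
    return count / window_min
```

Like the shell version, this only looks at the timestamp prefix and the marker string, so it tolerates whatever else the log line contains.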
23. Monitor & Measure
#!/bin/sh
#%# family=manual
#%# capabilities=autoconf suggest
case $1 in
    config)
        echo graph_category openerp
        echo graph_title openerp rpc requests min/average response time
        echo graph_vlabel seconds
        echo graph_args --units-exponent -3
        echo min.label min
        echo min.warning 1
        echo min.critical 5
        echo avg.label average
        echo avg.warning 1
        echo avg.critical 5
        exit 0
        ;;
esac
# watch out for the time zone of the logs => using date -u for UTC timestamps
result=$(tail -60000 /var/log/openerp.log | grep "object.execute_kw time" | \
    awk "BEGIN{sum=0;count=0} (\$1 \" \" \$2) >= \"$(date +'%F %H:%M:%S' -ud '5 min ago')\" {split(\$8,t,\":\"); time=0+t[2]; if (min==\"\") {min=time}; sum+=time; count+=1; min=(time>min)?min:time} END{print min, sum/count}")
echo -n "min.value "
echo ${result} | cut -d" " -f1
echo -n "avg.value "
echo ${result} | cut -d" " -f2
exit 0
Munin plugin for response time
24. Monitor PostgreSQL
o Munin has many builtin plugins (enabled with symlinks)
o Enable extra logging in postgresql.conf
o log_min_duration_statement = 50
  ● Set to 0 to log all queries
  ● Instagram gist to capture sample + analyze
o lc_messages = 'C'
  ● For automated log analysis
26. Analysis – Where to start?
o Many factors can impact performance
o Hardware bottlenecks (check munin graphs!)
o Business logic burning CPU
  ● use `kill -3 ${odoo_pid}` for live traces
o Transaction locking in the database
o SQL query performance
27. Analysis – SQL Logs
o Thanks to extra PostgreSQL logging you can use pg_badger to analyze the query log
o Produces a very insightful statistical report
o Use EXPLAIN ANALYZE to check the behavior of suspicious queries
o Keep in mind that PostgreSQL uses the fastest way, not necessarily the one you expect (e.g. indexes are not always used if a sequential scan is faster)
28. PostgreSQL Analysis
o Important statistics tables
o pg_stat_activity: real-time queries/transactions
o pg_locks: real-time transaction heavy locks
o pg_stat_user_tables: generic use stats for tables
o pg_statio_user_tables: I/O stats for tables
29. Analysis – Longest tables
# SELECT schemaname || '.' || relname AS "table", n_live_tup AS num_rows
  FROM pg_stat_user_tables
  ORDER BY n_live_tup DESC LIMIT 10;

 table                           | num_rows
---------------------------------+----------
 public.stock_move               |   179544
 public.ir_translation           |   134039
 public.wkf_workitem             |    97195
 public.wkf_instance             |    96973
 public.procurement_order        |    83077
 public.ir_property              |    69011
 public.ir_model_data            |    59532
 public.stock_move_history_ids   |    58942
 public.mrp_production_move_ids  |    49714
 public.mrp_bom                  |    46258
30. Analysis – Biggest tables
# SELECT nspname || '.' || relname AS "table",
         pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
  FROM pg_class C
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
  WHERE nspname NOT IN ('pg_catalog', 'information_schema')
    AND C.relkind <> 'i'
    AND nspname !~ '^pg_toast'
  ORDER BY pg_total_relation_size(C.oid) DESC
  LIMIT 10;
┌──────────────────────────────────────────┬────────────┐
│ table │ total_size │
├──────────────────────────────────────────┼────────────┤
│ public.stock_move │ 525 MB │
│ public.wkf_workitem │ 111 MB │
│ public.procurement_order │ 80 MB │
│ public.stock_location │ 63 MB │
│ public.ir_translation │ 42 MB │
│ public.wkf_instance │ 37 MB │
│ public.ir_model_data │ 36 MB │
│ public.ir_property │ 26 MB │
│ public.ir_attachment │ 14 MB │
│ public.mrp_bom │ 13 MB │
└──────────────────────────────────────────┴────────────┘
31. Reduce database size
o Enable filestore for attachments (see FAQ)
o No files in binary fields, use the filestore
  ● Faster dumps and backups
  ● Filestore easy to rsync for backups too
34. Analysis – Locking (9.1)
-- For PostgreSQL 9.1
create view pg_waiter_holder as
select
wait_act.datname,
pg_class.relname,
wait_act.usename,
waiter.pid as waiterpid,
waiter.locktype,
waiter.transactionid as xid,
waiter.virtualtransaction as wvxid,
waiter.mode as wmode,
wait_act.waiting as wwait,
substr(wait_act.current_query,1,30) as wquery,
age(now(),wait_act.query_start) as wdur,
holder.pid as holderpid,
holder.mode as hmode,
holder.virtualtransaction as hvxid,
hold_act.waiting as hwait,
substr(hold_act.current_query,1,30) as hquery,
age(now(),hold_act.query_start) as hdur
from pg_locks holder join pg_locks waiter on (
holder.locktype = waiter.locktype and (
holder.database, holder.relation,
holder.page, holder.tuple,
holder.virtualxid,
holder.transactionid, holder.classid,
holder.objid, holder.objsubid
) is not distinct from (
waiter.database, waiter.relation,
waiter.page, waiter.tuple,
waiter.virtualxid,
waiter.transactionid, waiter.classid,
waiter.objid, waiter.objsubid
))
join pg_stat_activity hold_act on (holder.pid=hold_act.procpid)
join pg_stat_activity wait_act on (waiter.pid=wait_act.procpid)
left join pg_class on (holder.relation = pg_class.oid)
where holder.granted and not waiter.granted
order by wdur desc;
35. Analysis – Locking (9.2)
-- For PostgreSQL 9.2
create view pg_waiter_holder as
select
wait_act.datname,
wait_act.usename,
waiter.pid as wpid,
holder.pid as hpid,
waiter.locktype as type,
waiter.transactionid as xid,
waiter.virtualtransaction as wvxid,
holder.virtualtransaction as hvxid,
waiter.mode as wmode,
holder.mode as hmode,
wait_act.state as wstate,
hold_act.state as hstate,
pg_class.relname,
substr(wait_act.query,1,30) as wquery,
substr(hold_act.query,1,30) as hquery,
age(now(),wait_act.query_start) as wdur,
age(now(),hold_act.query_start) as hdur
from pg_locks holder join pg_locks waiter on (
holder.locktype = waiter.locktype and (
holder.database, holder.relation,
holder.page, holder.tuple,
holder.virtualxid,
holder.transactionid, holder.classid,
holder.objid, holder.objsubid
) is not distinct from (
waiter.database, waiter.relation,
waiter.page, waiter.tuple,
waiter.virtualxid,
waiter.transactionid, waiter.classid,
waiter.objid, waiter.objsubid
))
join pg_stat_activity hold_act on (holder.pid=hold_act.pid)
join pg_stat_activity wait_act on (waiter.pid=wait_act.pid)
left join pg_class on (holder.relation = pg_class.oid)
where holder.granted and not waiter.granted
order by wdur desc;
36. Analysis – Locking
o Verify blocked queries
o Update to PostgreSQL 9.3 if possible
  ● More efficient locking for Foreign Keys
o Try pg_activity (top-like): pip install pg_activity
# SELECT * FROM pg_waiter_holder;
 relname | wpid  | hpid  | wquery                      | wdur            | hquery
---------+-------+-------+-----------------------------+-----------------+-----------------------
         | 16504 | 16338 | update "stock_quant" set "s | 00:00:57.588357 | <IDLE> in transaction
         | 16501 | 16504 | update "stock_quant" set "f | 00:00:55.144373 | update "stock_quant"
(2 rows)
 ... hquery                      | hdur            | wmode     | hmode
 ... <IDLE> in transaction       | 00:00:00.004754 | ShareLock | ExclusiveLock
 ... update "stock_quant" set "s | 00:00:57.588357 | ShareLock | ExclusiveLock
38. Top 5 Problems in Custom Apps
o 1. Wrong use of stored computed fields
o 2. Domain evaluation strategy
o 3. Business logic triggered too often
o 4. Misuse of the batch API
o 5. Custom locking
39. 1. Stored computed fields
o Be very careful when you add stored computed fields (using the old API)
o Manually set the right trigger fields + function:
  store = {'trigger_model': (mapping_function, [fields...], priority)}
o store = True is a shortcut for:
  {self._name: (lambda self, cr, uid, ids, c: ids, None, 10)}
o Do not add this on master data (products, locations, users, companies, etc.)
40. 2. Domain evaluation strategy
o Odoo cross-object domain expressions do not use JOINs by default, to respect modularity and ACLs
o e.g. search([('picking_id.move_ids.partner_id', '!=', False)])
  ● Searches all moves without partner!
  ● Then uses “id IN <found_move_ids>”!
o Imagine this in record rules (global security filter)
o Have a look at auto_join (v7.0+)
  'move_ids': fields.one2many('stock.move', 'picking_id', string='Moves', auto_join=True)
41. 3. Business logic triggered too often
o Think about it twice when you override create() or write() to add your stuff
o How often will this be called? Should it be?
o Think again if you do it on a high-volume object, such as o2m line records (sale.order.line, stock.move, ...)
o Again, make sure you don't alter master data
42. 4. Misuse of batch API
o The API works with batches
o Computed fields work in batches
o Model.browse() pre-fetches in batches
o See @one in the new API
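A toy illustration of why the batch API matters. The FakeDB class below is a stand-in round-trip counter, not Odoo's actual ORM: fetching records one by one costs one round-trip each, while a single batched fetch (what Model.browse() pre-fetching does for you) costs one in total:

```python
class FakeDB:
    """Counts queries to contrast per-record reads with one batched read."""
    def __init__(self, rows):
        self.rows = rows
        self.queries = 0

    def fetch(self, ids):
        self.queries += 1  # one round-trip per call, however many ids
        return {i: self.rows[i] for i in ids}

rows = {i: {"name": f"rec{i}"} for i in range(100)}

# Anti-pattern: one query per record (N round-trips)
db = FakeDB(rows)
for i in range(100):
    db.fetch([i])
n_plus_one = db.queries      # 100 round-trips

# Batch API: prefetch all records in one query
db = FakeDB(rows)
db.fetch(list(range(100)))
batched = db.queries         # 1 round-trip
```

Looping over records and triggering a fetch per iteration is exactly the pattern that turns a 40-query read transaction into a multi-hundred-query one.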
43. 5. Custom Locking
o In general PostgreSQL and the ORM do all the DB and Python locking we need
o Rare cases with manual DB locking
  ● Inter-process mutex in db (ir.cron)
  ● Sequence numbers
  ● Reservations in double-entry systems
o Python locking
  ● Caches and shared resources (db pool)
o You probably do not need more than this!