This document covers packaging and distributing Python applications. It opens with an introduction from the author and thanks to contributors, then covers building and distributing packages with distutils and setuptools, including the egg format. It discusses solutions to common packaging problems: PyPI as a single point of failure, the need for private packages, and making plone.org/products PyPI-compatible. It promotes tools such as zc.buildout, collective.dist, and running a private PyPI server, and closes by contrasting the complexity of installing packages in 2006 with the simplicity enabled by zc.buildout today.
delivering applications with zc.buildout and a distributed model - Plone Conference 2008
1. delivering applications
with zc.buildout
and a distributed model
Tarek Ziadé <tarek.ziade@ingeniweb.com>
2. Who am I ?
New to Plone (~1 year) -> I worked on peripheral matters
Used to Zope (CPS core developer)
CTO at Ingeniweb - a Plone company (>20 developers)
Python fan - wrote some books about it
(even in English, because you don’t hear my accent in a book)
involved in plone.org migration, PSC, zc.buildout
3. It will look good on your desk
beside Martin Aspeli’s book
Special promotion during the PloneConf :
buy one book == get a big hug
4. Thanks !
* Andreas Jung
* Veda Williams
* David Glick
* Jeff Kowalczyk
* Youenn Boussard
* Christian Klinger
* Jesse Snyder
* Alec Mitchell
* John Habermann
* Maurits van Rees
* Jean-François Roche
* Martin Aspeli
* Alain Meurant
* Aleksandr Vladimirskiy
* Jon Stahl
* Alexander Limi
* Stephen McMahon
5. Part 1 - working with packages
Part 2 - working with zc.buildout
Part 3 - application lifecycle
23. Problems with packaging ?
#1 PyPI == SPOF
#2 packages need privacy sometimes
#3 plone.org/products is *dying*
24. Solutions ?
#1 PyPI == SPOF
Make a PyPI mirror
#2 packages need privacy sometimes
Run your own private PyPI
#3 plone.org/products is *dying*
Make plone.org/products PyPI compatible
25. #1 Make a PyPI mirror : Smart mirroring
easy_install collective.eggproxy
27. In zc.buildout : the index option
[buildout]
index = http://my.mirror:8888
all calls will go through the proxy
the mirror is filled on-demand
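With the proxy in place, a quick way to sanity-check that the mirror answers is to fetch an index page through it. This is a minimal sketch, not part of the talk: the host and port are the placeholders from the buildout snippet above, and it assumes the proxy serves PyPI-style "simple" index pages over HTTP.

```python
# Minimal reachability check for a package index or proxy.
# The URL is a placeholder for whatever host:port the mirror runs on.
from urllib.request import urlopen  # the 2008 original would have used urllib2
from urllib.error import URLError


def index_alive(index_url, timeout=5):
    """Return True if the index responds with HTTP 200, False otherwise."""
    try:
        return urlopen(index_url, timeout=timeout).status == 200
    except (URLError, OSError):
        return False


# usage (hypothetical mirror address):
# index_alive("http://my.mirror:8888/simple/")
```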
28. #2 Run your own private PyPI
PloneSoftwareCenter !
44. 5 hours in 2006
install python extra packages
get zope
install zope
create an instance
get extra products
read extra products doc
install extra products dependencies
install extra products
doesn’t work
ahhh right, install python-ldap
checkout products in development
doesn’t work
ahhh right, wrong python-ldap version
start to work
55. zc.buildout best practices
#1 use the same layout for all your projects
#2 make sure all developers have the same environment
#3 use one cfg per target
56. #1 same layout for all projects
project1
    docs
    buildout
    packages
    releases
project2
    docs
    buildout
    packages
    releases
....
59. #2 make sure all developers have the same environment
Warning
Plone buildouts are source based
Windows developers
Get my Windows installer : python2.4.4-win32.zip
Google “An installer for a buildout-ready Windows”
60. #3 use one cfg per target
Typical buildout layout uses the extends feature
buildout.cfg
dev.cfg (extends buildout.cfg)
prod.cfg (extends buildout.cfg)
+ bootstrap.py
61. buildout.cfg :
[buildout]
parts =
    one
    two

dev.cfg :
[buildout]
extends = buildout.cfg
parts =
    one
    two
    three
develop =
    ...
62. demo
creating a fresh Plone 3 buildout (Paste)
adding the dev.cfg
hooking a new development package
adding a prod.cfg
64. end of part 2
questions ?
#1 use the same layout
for all projects
#2 make sure all developers
have the same environment
#3 use one cfg per target
68. Releasing packages
for package in packages:
    raise the version
    edit CHANGES.txt
    create a branch (svn)
    go to that branch
    remove the dev tag (setup.cfg)
    release it with “mregister sdist mupload -r somewhere”
    release it with “mregister sdist mupload -r somewhereelse”
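The manual loop above can be sketched as a small helper script. Everything here is hypothetical (package name, branch URL, repository aliases): it only assembles the commands the process would run, so all the steps are visible in one place, instead of executing anything.

```python
# Sketch of the manual release loop from the slide above.
# All names (package, URLs, repository aliases) are placeholders.

def release_commands(package, version, branch_url):
    """Return the shell commands the manual release process would run."""
    return [
        # raise the version and record the changes (done by hand in the slide)
        "edit %s/setup.py   # version = '%s'" % (package, version),
        "edit %s/CHANGES.txt" % package,
        # create an svn branch and move to it
        "svn cp trunk %s" % branch_url,
        "svn switch %s" % branch_url,
        # remove the dev tag, then upload to both indexes
        # (mregister/mupload come from collective.dist)
        "edit %s/setup.cfg  # drop the dev tag" % package,
        "python setup.py mregister sdist mupload -r somewhere",
        "python setup.py mregister sdist mupload -r somewhereelse",
    ]


for cmd in release_commands("my.package", "0.1",
                            "http://svn.example.com/my.package/branches/0.1"):
    print(cmd)
```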
69. Releasing packages with collective.releaser:
for package in packages:
    python setup.py release
73. Release the buildout
What packages should be frozen ?
- recipes
- your released packages
- exceptions (security fixes, major bug fixes)
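One common way to freeze those packages is a versions section in the release buildout. A minimal sketch, with hypothetical part and package names (the pins shown are examples, not the ones from the talk):

```ini
[buildout]
extends = buildout.cfg
versions = versions

[versions]
# pin the recipes and your released packages;
# leave room for exceptions such as security fixes
plone.recipe.zope2instance = 2.7
my.released.package = 0.1
```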
74. Release the buildout
authentication: use lovely.buildouthttp

[buildout]
...
extensions = lovely.buildouthttp
...
repository: http://my-company.com/products

$HOME/.buildout/.httpauth :
pypi,http://my-company.com/products,tarek,hahaha
pypi,http://plone.org/products,tarek,hahaha
75. Release the buildout -> project layout
project
...
buildout
packages
release/0.1 <- tag for the buildout
$ cd buildout
$ svn cp . http://somewhere/releases/0.1
76. Release the buildout with collective.releaser :
with project_release
$ cd buildout
$ project_release
What version are you releasing? 0.1
Added version file.
77. Build your release distribution
- bin/buildout on target system
- remove some stuff
- offline mode to ‘true’
- tar -czvf release-0.1.tgz release/0.1
78. Build your distribution with collective.releaser
with project_deploy
$ svn co http://somesvn/my_project/releases/0.1 project
$ cd project
$ project_deploy prod.cfg