Takayuki Shimizukawa presented an introduction to using Sphinx and docstrings to generate documentation from Python source code. Key points included setting up Sphinx with the autodoc, autosummary, and doctest extensions to automatically generate API documentation and to test the code examples embedded in docstrings. Writing informative docstrings with parameter and return-type information, as well as code examples, lets Sphinx generate detailed, easy-to-understand documentation for Python modules, functions, and methods.
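As a quick illustration, the kind of docstring the talk builds toward looks like this (a minimal sketch; deep_thought and dumps are the talk's running example, and the JSON output shown follows the talk's toy format):

    def dumps(obj, ensure_ascii=True):
        """Serialize ``obj`` to a JSON formatted :class:`str`.

        >>> dumps(dict(spam=1, ham='egg'))
        '{spam: 1, ham: "egg"}'

        :param dict obj: dict type object to serialize.
        :param bool ensure_ascii: Default is True. If False,
            non-ASCII characters are kept as-is.
        :return: JSON formatted string
        :rtype: str
        """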
Sphinx autodoc - automated API documentation - PyCon.KR 2015
Takayuki Shimizukawa
Using the automated documentation feature of Sphinx, you can easily produce extensive documentation for a Python program.
You just write the Python function documents (docstrings); Sphinx organizes them into the document, which can be converted to a variety of formats.
In this session, I'll explain a documentation procedure that uses the sphinx autodoc and autosummary extensions.
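The conf.py setup the session walks through amounts to a few lines (a sketch; the extension names are built-in Sphinx modules, and the sys.path line assumes the docs live one directory below the package):

    import os
    import sys
    sys.path.insert(0, os.path.abspath('..'))  # make your package importable for autodoc

    extensions = [
        'sphinx.ext.autodoc',      # pull docstrings into the docs
        'sphinx.ext.autosummary',  # generate per-module stub pages
        'sphinx.ext.doctest',      # run doctest blocks with "make doctest"
        'sphinx.ext.coverage',     # report undocumented objects with "make coverage"
    ]
    autosummary_generate = True    # create the stub .rst files automatically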
7. Docstring

1. def dumps(obj, ensure_ascii=True):
2.     """Serialize ``obj`` to a JSON formatted ``str``.
3.     """
4.
5.     ...
6.     return ...

Lines 2-3 are the docstring.
You can see the string with "help(dumps)".
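For reference, this is roughly what "help(dumps)" prints for such a function (a sketch, assuming the deep_thought.utils module used throughout the talk):

    >>> from deep_thought.utils import dumps
    >>> help(dumps)
    Help on function dumps in module deep_thought.utils:

    dumps(obj, ensure_ascii=True)
        Serialize ``obj`` to a JSON formatted ``str``.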
9. Why don't you write docstrings?

"I don't know what or where I should write."
"Is there a docstring format spec?"
"It's not beneficial."

I'll show you a good way to write docstrings.
10. Goal of this session

How to generate a doc from Python source code.
Re-discovering the meaning of docstrings.
12. What is Sphinx?

Sphinx is a documentation generator.
Sphinx generates docs in several output formats from the reST text markup.
[Diagram: reStructuredText (reST) written in your favorite editor passes through the reST parser to the HTML, ePub, and LaTeX builders; the HTML builder applies an HTML theme, and the LaTeX builder hands off to texlive.]
13. The history of Sphinx (short ver.)

[Timeline: until ~2007, the docs maintained by "the father of Sphinx" were too hard to maintain; from 2007, Sphinx made them easy to write and easy to maintain.]
14. Sphinx Before and After

Before:
There was no standard way to write documents.
Sometimes we needed to convert markup into other formats.

After:
Generate multiple output formats from a single source.
Integrated HTML themes make docs easier to read.
API references can be integrated into narrative docs.
Automated doc building and hosting by ReadTheDocs.
15. Many docs are written with Sphinx

For example:
Python libraries/tools: Python, Sphinx, Flask, Jinja2, Django, Pyramid, SQLAlchemy, Numpy, SciPy, scikit-learn, pandas, fabric, ansible, awscli, …
Non-Python libraries/tools: Chef, CakePHP(2.x), MathJax, Selenium, Varnish
17. Sphinx extensions (built-in)

Sphinx provides these extensions to support automated API documentation:
sphinx.ext.autodoc
sphinx.ext.autosummary
sphinx.ext.doctest
sphinx.ext.coverage
[Diagram: the four extensions plug in around the docutils reST parser, in front of the HTML, ePub, and LaTeX builders.]
19. How to install Sphinx

$ pip install sphinx

Your code and Sphinx should be in a single Python environment.
The Python version is also important.
20. How to start a Sphinx project

$ cd /path/to/your-code
$ sphinx-quickstart doc -m
...
Project name: Deep thought
Author name(s): Mice
Project version: 0.7.5
...
...
Finished

This creates a doc directory. Keep pressing the ENTER key.
"-m" generates a minimal Makefile/make.bat; this session's examples assume the option.
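If you prefer to skip the wizard entirely, the same answers can be passed on the command line (a sketch; -q, -p, -a, and -v are standard sphinx-quickstart options):

    $ sphinx-quickstart doc -m -q -p "Deep thought" -a Mice -v 0.7.5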
21. make html once

$ cd doc
$ make html
...
Build finished. The HTML pages are in _build/html.

The "make html" command generates HTML files into _build/html.
24. Set up the autodoc extension

$ tree /path/to/your-code
+- deep_thought
|  +- __init__.py
|  +- api.py
|  +- calc.py
|  +- utils.py
+- doc
|  +- _build/
|  |  +- html/
|  +- _static/
|  +- _template/
|  +- conf.py
|  +- index.rst
|  +- make.bat
|  +- Makefile
+- setup.py

doc/conf.py:

1. import os
2. import sys
3. sys.path.insert(0, os.path.abspath('..'))
4. extensions = [
5.     'sphinx.ext.autodoc',
6. ]

Line 3: add your library path so Sphinx autodoc can import your modules.
Line 5: add 'sphinx.ext.autodoc' to use the extension.
25. Add the automodule directive to your doc

doc/index.rst:

1. Deep thought API
2. ================
3.
4. .. automodule:: deep_thought.utils
5.    :members:
6.

deep_thought/utils.py:

1. "utility functions"
2.
3. def dumps(obj, ensure_ascii=True):
4.     """Serialize ``obj`` to a JSON formatted ``str``.
5.     """
6.     ...

Line 4: the automodule directive imports the specified module and inspects it.
Line 5: the :members: option inspects all members of the module, not only the module docstring.
26. make html

$ cd doc
$ make html
...
Build finished. The HTML pages are in _build/html.
27. How does it work?

The automodule directive generates intermediate reST internally (doc/index.rst -> intermediate reST):

1. Deep thought API
2. ================
3.
4. .. py:module:: deep_thought.utils
5.
6. utility functions
7.
8. .. py:function:: dumps(obj, ensure_ascii=True)
9.    :module: deep_thought.utils
10.
11.    Serialize ``obj`` to a JSON formatted :class:`str`.
28. You can see the reST with the -vvv option

$ make html SPHINXOPTS=-vvv
...
[autodoc] output:
.. py:module:: deep_thought.utils

utility functions

.. py:function:: dumps(obj, ensure_ascii=True)
   :module: deep_thought.utils

   Serialize ``obj`` to a JSON formatted :class:`str`.
29. Take care!

Sphinx autodoc imports your code to get docstrings.
That means autodoc will execute code at module global level.
30. Dangerous code

danger.py:

1. import os
2.
3. def delete_world():
4.     os.system('sudo rm -Rf /')
5.
6. delete_world()  # will be executed at "make html"
31. Execution guard on import

danger.py:

1. import os
2.
3. def delete_world():
4.     os.system('sudo rm -Rf /')
5.
6. delete_world()  # will be executed at "make html"

safer.py (execution guard):

1. import os
2.
3. def delete_world():
4.     os.system('sudo rm -Rf /')
5.
6. if __name__ == '__main__':
7.     delete_world()  # doesn't execute at "make html"
33. "Oh, I can't understand the type of arguments
and meanings even reading this!"
33
Lacking necessary information
34. "Info field lists" for arguments

deep_thought/utils.py:

1. def dumps(obj, ensure_ascii=True):
2.     """Serialize ``obj`` to a JSON formatted
3.     :class:`str`.
4.
5.     :param dict obj: dict type object to serialize.
6.     :param bool ensure_ascii: Default is True. If
7.         False, all non-ASCII characters are not ...
8.     :return: JSON formatted string
9.     :rtype: str
10.    """

http://sphinx-doc.org/domains.html#info-field-lists
35. "Info field lists" for arguments

deep_thought/utils.py:

def dumps(obj, ensure_ascii=True):
    """Serialize ``obj`` to a JSON formatted :class:`str`.

    :param dict obj: dict type object to serialize.
    :param bool ensure_ascii: Default is True. If
        False, all non-ASCII characters are not ...
    :return: JSON formatted string
    :rtype: str
    """
    ...
36. Cross-reference to functions

examples.rst:

1. Examples
2. ==========
3.
4. This is a usage of :func:`deep_thought.utils.dumps`
   blah blah blah. ...

The :func: role renders as a reference (hyperlink) to the function.
38. Code example in a docstring

deep_thought/utils.py:

1. def dumps(obj, ensure_ascii=True):
2.     """Serialize ``obj`` to a JSON formatted
3.     :class:`str`.
4.
5.     For example:
6.
7.     >>> from deep_thought.utils import dumps
8.     >>> data = dict(spam=1, ham='egg')
9.     >>> dumps(data)
10.    '{spam: 1, ham: "egg"}'
11.
12.    :param dict obj: dict type object to serialize.
13.    :param bool ensure_ascii: Default is True. If
14.        False, all non-ASCII characters are not ...

Lines 7-10 are a doctest block.
You can copy & paste those lines from the Python interactive shell.
40. We expect ...

You can expect that developers will update code examples when the interface is changed.

deep_thought/utils.py:

1. def dumps(obj, ensure_ascii=True):
2.     """Serialize ``obj`` to a JSON formatted
3.     :class:`str`.
4.
5.     For example:
6.
7.     >>> from deep_thought.utils import dumps
8.     >>> data = dict(spam=1, ham='egg')
9.     >>> dumps(data)
10.    '{spam: 1, ham: "egg"}'

The code example is very close to the implementation!!
44. Result of "make doctest"

$ make doctest
...
Document: api
-------------
********************************************************
File "api.rst", line 11, in default
Failed example:
    dumps(data)
Expected:
    '{spam: 1, ham: "egg"}'
Got:
    'to-be-implemented'
...
make: *** [doctest] Error 1
57. make coverage and check the result

$ make coverage
...
Testing of coverage in the sources finished, look at the
results in _build/coverage.
$ ls _build/coverage
c.txt  python.txt  undoc.pickle

_build/coverage/python.txt:

1. Undocumented Python objects
2. ===========================
3. deep_thought.utils
4. ------------------
5. Functions:
6.  * egg

This function doesn't have a doc!
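For context, the report above would be triggered by something like this in the example module (a hypothetical sketch; egg is simply a function without a docstring):

    # deep_thought/utils.py (sketch)
    def egg():
        return 'spam'  # no docstring, so "make coverage" lists it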
58. CAUTION!

$ make coverage
...
Testing of coverage in the sources finished, look at the
results in _build/coverage.
$ ls _build/coverage
c.txt  python.txt  undoc.pickle

python.txt:

1. Undocumented Python objects
2. ===========================
3. deep_thought.utils
4. ------------------
5. Functions:
6.  * egg

The command always returns ZERO.
coverage.xml does not exist.
61. Why don't you write docstrings?

"I don't know what or where I should write."
Write a description, the arguments, and doctest blocks on the lines right after the function signature.

"Is there a docstring format spec?"
Yes: use "info field lists" for the argument spec, and doctest blocks for code examples.

"It's not beneficial."
Use autodoc, autosummary, doctest, and coverage to make it beneficial.
64. Options for autodoc

:members: blah
    Document just the specified members. Empty means ALL.
:undoc-members: ...
    Document members which don't have a docstring.
:private-members: ...
    Document private members whose names start with an underscore.
:special-members: ...
    Document special members whose names start and end with double underscores.
:inherited-members: ...
    Document members inherited from the superclass.
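Put together, the options are used like this in a reST file (a sketch; which options you enable depends on how much you want documented):

    .. automodule:: deep_thought.utils
       :members:
       :undoc-members:
       :private-members: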
71. Translation into other languages

$ make gettext
...
Build finished. The message catalogs are in _build/gettext.

$ sphinx-intl update -p _build/gettext -l es

locale/es/LC_MESSAGES/generated.po:

#: ../../../deep_thought/utils.py:docstring of deep_thought.utils.dumps
msgid "Serialize ``obj`` to a JSON formatted :class:`str`."
msgstr "Serializar ``obj`` a un formato JSON :class:`str`."

msgid "For example:"
msgstr "Por ejemplo:"

conf.py:

language = 'es'
locale_dirs = ['locale']

$ make html
...
Build finished. The HTML pages are in _build/html.
(Presenter note: make the text here larger; use the mouse as a laser pointer.)
Hi everyone. Thank you for coming to my session.
This session's title is: Sphinx autodoc - automated API documentation.
Hello.
At first, let me introduce myself.
My name is Takayuki Shimizukawa, and I come from Japan.
I do three open-source activities:
1. Sphinx co-maintainer since the end of 2011.
2. Organizer of the Sphinx-users.jp user group in Japan.
3. Member of the PyCon JP Committee.
And I'm working for BePROUD.
We develop web applications for business customers using Django, Pyramid, SQLAlchemy, Sphinx, and other Python-related tools.
Before my main presentation, I'd like to introduce "PyCon JP 2015" in Tokyo, Japan.
We will hold the event this October.
Registration is open. Please join us.
Anyway.
Sphinx autodoc. This is the main topic of this session.
Autodoc is a feature that automatically generates documentation from source code.
Autodoc uses the function definitions and also the docstrings of those functions.
Before we jump into the main topic, I want to know how many people know about docstrings, and how many people write them.
Docstrings are a feature of Python.
Do you know about docstrings? How many people know them?
Please raise your hand.
10, 20, 30.. 55 hands. Thanks.
Hmm, it might be a minor feature of Python.
OK, these red lines are a docstring.
A docstring describes how to use the function, and it is written as the first statement of the function body.
When you type "help(dumps)" in a Python interactive shell, you will get the docstring.
Have you written API docs as docstrings?
Please raise your hand again.
10, 20.. 22.5 hands.
Thanks.
That's a very small number of hands.
But some people do write docstrings.
So, what is the reason you don't write them?
Someone would say,
* I don't know what or where I should write them.
* Are there specific docstring formats?
* It's not beneficial.
For example, sometimes docstrings are not updated even when the function's behavior changes.
Those opinions are understandable.
So then, I'll explain how to write docstrings.
Goals of this session.
First: how to generate a doc from Python source code.
Second: re-discovering the meaning of docstrings.
OK, let's move forward.
Sphinx autodoc is the most useful way to put docstrings to work.
So, before talking about docstrings, I'll introduce the basics of Sphinx and how to set it up.
What is Sphinx?
Sphinx is a documentation generator.
Sphinx generates docs in several output formats from reStructuredText, an extensible markup.
(Point at the input and the output with the pointer.)
The history of Sphinx.
This man, Georg Brandl, is the father of Sphinx.
(Click.)
Until 2007, the official Python documentation was written in LaTeX.
But it was too hard to maintain.
Georg was trying to change that situation.
(Click.)
So then, he created Sphinx in 2007.
Sphinx brought ease of use and maintainability to the official Python documentation.
Sphinx before and after.
Before:
There was no standard way to write documents. One example is the official Python documentation: it was a jungle of LaTeX and assorted Python scripts.
And sometimes we needed to convert markup into other formats.
Since Sphinx was released:
* We can generate multiple output formats from a single source.
* Integrated HTML themes make docs easier to read.
* API references can be integrated into narrative docs.
* Automated doc building and hosting by the ReadTheDocs service.
Nowadays, Sphinx is used by these libraries and tools.
Python libraries/tools: Python, Sphinx, Flask, Jinja2, Django, Pyramid, SQLAlchemy, Numpy, SciPy, scikit-learn, pandas, fabric, ansible, awscli, …
And non-Python libraries/tools also use Sphinx for their docs: Chef, CakePHP(2.x), MathJax, Selenium, Varnish.
Sphinx provides these extensions to support automated API documentation:
sphinx.ext.autodoc
sphinx.ext.autosummary
sphinx.ext.doctest
sphinx.ext.coverage
Autodoc is the most important feature of Sphinx.
Almost all Python-related libraries use the autodoc feature.
OK, let's set up a Sphinx project for this code, as an example.
This library stands in for your code to explain the autodoc feature.
The library name is "Deep Thought".
This is the structure of the library.
The library has three modules: api.py, calc.py and utils.py.
The second box shows the first lines of program code in utils.py.
If you don't have Sphinx in your environment, you need to install it with this command:
pip install sphinx
Please note that your source code and Sphinx should be installed in a single Python environment.
The Python version is also important. If you install Sphinx into a Python 3 environment while your code is written in Python 2, autodoc will raise an exception when importing your Python 2 source code.
Once you have installed Sphinx, you can generate your documentation scaffold using the "sphinx-quickstart" command.
An interactive wizard is invoked, and it asks for the project name, author name, and project version.
The wizard asks you many more questions, but DON'T PANIC: usually all you need to do is keep pressing the Enter key.
Note that the -m option is important.
If you invoke the command without it, you will get a Makefile with hard-coded make targets, which will annoy you. And these presentation slides rely on the option.
The option was introduced in Sphinx 1.3.
And -m will become the default from Sphinx 1.5.
So, type "make html" in doc directory to generate html output.
You can see the output in _build/html directory.
Now you can see the directories/files structure, like this.
Library files under deep_thought directory.
Build output under doc directory.
Scaffold files under doc directory.
In particular, you will see well utils.py, conf.py and index.rst in this session.
Now we ready to go.
Generate API docs from your Python source code.
Set up the Sphinx autodoc extension.
This is the conf.py file in your Sphinx scaffold.
What's important are the third and fifth lines.
Line 3: add your library path so that Sphinx autodoc can import your modules.
Line 5: add 'sphinx.ext.autodoc' to use the extension.
Next, let's specify the modules you want to document.
Add the automodule directive to your doc.
The first box is the utils.py file, part of the deep_thought example library.
The second box is a reST file. You can see the automodule usage in this box.
automodule is a Sphinx directive provided by the autodoc extension to generate documentation.
Let's look at the second box.
(Click.)
Line 4: the automodule directive imports the specified module and inspects it.
In this case, the deep_thought.utils module will be imported and inspected.
Line 5: the :members: option inspects all members of the module, not just the module docstring.
OK, we are now all ready. Let's invoke "make html" again.
So, as a result of "make html", you get an automatically generated document from the .py file.
Internally, the automodule directive inspects your module and renders the function signature, arguments, and docstring.
How does it work?
The autodoc directive generates intermediate reST, like this.
Actually, the intermediate file is not generated on your filesystem; it is created only in memory.
If you want to see the intermediate reST lines, you can use the -vvv option, like this.
As you see, the automodule directive is replaced with concrete documentation contents.
But please take care.
Sphinx autodoc imports your code to get docstrings.
It means autodoc will execute code at module global level.
Let me introduce a bad case related to this.
This module will remove all your files.
danger.py was designed as a command-line script, not for import from other modules.
If you try to document it using autodoc, the delete_world function will be called.
As a consequence, "make html" will destroy all your files.
On the other hand, safer.py (the lower block) uses an execution guard.
It's a very famous Python idiom.
Because of the execution guard, your files will not be removed by make html.
As a practical matter, you shouldn't try to document your package's setup.py using autodoc.
Now let's return to the docstring and its output.
This output lacks necessary information:
the information about the arguments.
If you are looking for the API reference, and you find this, you will say,
"Oh, I can't understand the types and meanings of the arguments even after reading this!"
In this case, you can use the "info field list" syntax for describing arguments.
A real docstring should have a description for each function argument, like this.
These red parts are a special version of "field lists" called "info field lists".
The specification of info field lists is described at the URL on the slide.
Info field lists are rendered like this.
The output is quite nice.
So you will say, "Oh, I can understand it!", maybe.
Cross-references to functions.
You can easily make a cross-reference to the dumps function from other locations.
Of course, cross-references work across pages.
So far, I introduced the basics of autodoc.
The next subject: detecting deviations between the implementation and the document,
by using doctest.
I think a good API has a good document that illustrates usage of the API with a code example.
If a doc has a code example, you can grasp the API usage quickly and exactly.
I added a code example, the four red lines, to the earlier docstring.
(Click.)
It's called a "doctest block".
Obviously, this looks like a session in the Python interactive shell.
Actually, you can copy & paste the red lines from the Python interactive shell.
After make html,
you get a syntax-highlighted doctest block, like this.
Library users can grasp the API usage quickly and exactly.
And users can also try it out easily.
And from the library developers' point of view,
the code example is very close to the implementation!
We can expect that library developers will update the code examples themselves when the interface is changed.
... Really?
Sorry, I don't believe it.
Even if the code examples are very close to the implementation, developers won't pay attention to them.
Developers have no spare time to read implicit rules from the code.
Explicit is better than implicit for us.
OK, let's use the doctest builder to detect deviations between the implementation and the documentation.
To use the doctest builder, you need to add the Sphinx doctest extension to conf.py, like this.
Line 5: add 'sphinx.ext.doctest'.
With only this, you are ready to use the doctest builder.
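(A minimal sketch of that conf.py change; the slide itself is not reproduced in this text, but the extension name is the real Sphinx module:)

    extensions = [
        'sphinx.ext.autodoc',
        'sphinx.ext.doctest',  # enables the doctest builder and "make doctest"
    ]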
OK, let's invoke the "make doctest" command.
(OK, let's invoke the "make doctest" command.)
After that, you can see that the dumps function gives us a different result from the expected one.
The expected one is: '{spam: 1, ham: "egg"}'
The actual one is: 'to-be-implemented'
It is not implemented properly yet.
Anyway, the doctest builder shows us the differences between the implementation and the sample code in the documentation.
Actually, if your unit tests already run the doctests, you don't need to do this with Sphinx.
However, if you don't write unit tests, "make doctest" would be a good place to start.
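(For reference, this is roughly how the same doctests can be folded into a standard unittest run — a sketch using the stdlib doctest module; deep_thought.utils is the talk's example module:)

    import doctest
    import unittest

    import deep_thought.utils

    def load_tests(loader, tests, ignore):
        # collect the doctest blocks from the module's docstrings
        tests.addTests(doctest.DocTestSuite(deep_thought.utils))
        return tests

    if __name__ == '__main__':
        unittest.main()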
Listing APIs automatically using autosummary.
As already noted, autodoc is very useful.
However, if you have a lot of functions in a lot of modules, ...
and you want an individual page for each module, you need to prepare a reST file per module.
(Click.)
This box is for utils.py.
In this case you should also prepare such .rst files for the api module and the calc module.
If you have 100 modules, you have to prepare 100 .rst files.
As you see, each reST file has just four lines.
You can produce them by repeated copy, paste, and a bit of modification.
However ... I believe you don't want to repeat that, like this.
Don't Repeat Yourself.
OK, let's use the autosummary extension to avoid such boring tasks.
Set up the Sphinx autosummary extension.
This is your conf.py again.
Line 6: add 'sphinx.ext.autosummary' to use the extension.
Line 8: use the 'members' option for every autodoc-related directive.
Line 9: generate the reST files for the modules you specify with the autosummary directive.
// Note: line 9 makes Sphinx invoke 'sphinx-apidoc' internally. The default is False; in that case, you need to invoke 'sphinx-apidoc' by hand.
You can use the autosummary directive in your reST files, as you see in the sketch below.
This sample uses the autosummary directive and the toctree option.
The :toctree: option is the directory location for the intermediate files that autosummary will generate.
And the contents of the autosummary directive, deep_thought.api, calc and utils, are the module names you want to document.
Thanks to autosummary, you get 100 intermediate .rst files if you have 100 modules.
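(A sketch of what such an index.rst can look like; the directive and the :toctree: option are real autosummary syntax, and the module list follows the talk's example:)

    .. autosummary::
       :toctree: generated

       deep_thought.api
       deep_thought.calc
       deep_thought.utils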
After run "make html" command again.
Finally, you can get each documented pages without forcing troublesome simple operations.
Additionally, "autosummary" directive you wrote was generating table of contents that linking each module pages.
Discovering undocumented APIs using the coverage extension.
So far, we've automated autodoc by using autosummary.
In addition, you can now also find deviations between documents and implementation by using doctest.
But how do you find a function that has no docstring at all?
For that situation, we can use the coverage extension to find undocumented functions, classes and methods.
To use the coverage extension, you add it to conf.py.
This is your conf.py again.
Line 7: add 'sphinx.ext.coverage' to use the extension.
That's all!
Let's invoke "make coverage" command.
After that, you can get a result of coverage measurement.
The coverage report is recorded in "_build/coverage/python.txt" that contains undocumented functions, classes and modules.
As you see, you can get the undocumented function name.
However, please take care that;
Command always return 0
Then you can't distinguish the presence or absence of the undocumented function by the return code.
IMO, it's fair enough because coverage command shouldn't fail regardless whether coverage is perfect or not.
However, unfortunately, "make coverage" also unsupported to generate coverage.xml for Jenkins or some CI tools.
As conclusion of this, you can discover the undocumented functions, but you can't integrate the information to a CI tools.
Sorry for inconvenience.
And we are waiting for your contribution to solve the problem.(bow)
Let's review the reasons for not writing docstrings that were introduced at the beginning.
"I don't know what or where I should write."
Write a description, the arguments, and doctest blocks on the lines right after the function signature.
"Is there a docstring format spec?"
Yes: you can use "info field lists" for the argument spec, and doctest blocks for code examples.
"It's not beneficial."
You can use autodoc, autosummary, doctest and coverage to make it beneficial.
I think these reasons are resolved by using the Sphinx autodoc features, aren't they?
Let's write docstrings, and use autodoc!
At the end, I'd like to introduce some tips.
The first one is options.
Options for autodoc:
:members: blah
documents just the specified members. If you specify the option without a parameter, it means ALL.
:undoc-members: ... documents members which don't have a docstring. If you specify the option without a parameter, all undocumented members are rendered.
:private-members: ... documents private members whose names start with an underscore.
:special-members: ... documents special members whose names start and end with double underscores.
:inherited-members: ... documents members inherited from the superclass.
Please refer to the Sphinx reference for the details of these options.
The second one is directives for Web APIs:
sphinxcontrib-httpdomain.
The sphinxcontrib-httpdomain third-party extension provides an HTTP domain to generate Web API docs.
As you see, you can use the get directive.
Httpdomain also provides:
other HTTP-related directives,
an "http" syntax highlighter.
It generates a nice Web API reference page and a well-organized Web API index page.
Httpdomain also contains the sphinxcontrib.autohttp extension, which supports the Flask, Bottle and Tornado web application frameworks to document Web API methods automatically using reflection.
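(A small sketch of the http:get directive mentioned above; this is standard sphinxcontrib-httpdomain syntax, and the /users route is a made-up example:)

    .. http:get:: /users/(int:user_id)

       Return the profile of the user with `user_id`.

       :statuscode 200: no error
       :statuscode 404: user not found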
The third one is directives for diagrams:
the blockdiag series.
blockdiag generates block-style diagrams from a text notation.
The rest of the blockdiag series, seqdiag, actdiag, nwdiag, packetdiag and rackdiag, has also been released.
These are standalone tools, so they work well without Sphinx.
And Sphinx extensions for the blockdiag series have also been released.
I'd like to introduce a sphinxcontrib-seqdiag example.
This is a "sphinxcontrib-seqdiag" example.
"seqdiag" generates sequence-style diagrams from a text notation.
First, you need to install it in order to use the seqdiag directive in your document.
Next, set up the extension in your conf.py.
And now, you can use the seqdiag directive as you see.
In this example, the Request class docstring contains a seqdiag directive and its notation.
Finally, you get a sequence diagram integrated into your documentation.
So you can describe the usage of an API, the behavior of an API, and so on.
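(A sketch of what such a directive can look like; the notation is real seqdiag syntax, and the participants are a made-up example:)

    .. seqdiag::

       seqdiag {
         browser -> webserver [label = "GET /api"];
         webserver -> database [label = "SELECT"];
         webserver <- database;
         browser <- webserver [label = "JSON"];
       }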
The last one is document translation.
You can get translated output without editing the reST or Python code.
For that, you can use the "make gettext" command, which generates gettext-style pot files.
"make gettext" extracts text from the reST files and from the Python source files referenced by autodoc.
That means you can translate them into any language without rewriting the original reST files and Python source files.
If you are interested, please join my session tomorrow!