Sound on Android is a topic that is rarely covered, which is why I wanted to shed some light on my experience with the sound management APIs on Android.
So in this talk, we'll touch on how sound actually works programmatically, and we'll talk about how to play a sound on Android in the simplest way. We will also cover the principle of audio focus: what it is, and how and when to use it. We'll see what we can do with MIDI as well. And finally, we'll go deeper down the rabbit hole and get introduced to the lower levels of sound processing with the help of OpenSL ES and the Superpowered library.
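As a taste of the simple playback path mentioned above, a minimal Kotlin sketch using MediaPlayer might look like this. It is a sketch, not the talk's actual code: the `Context` parameter and the `R.raw.beep` resource are assumptions for illustration.

```kotlin
import android.content.Context
import android.media.MediaPlayer

// Simplest playback path: MediaPlayer handles decoding, buffering and output.
// R.raw.beep is a hypothetical short sound placed in res/raw/.
fun playBeep(context: Context) {
    val player = MediaPlayer.create(context, R.raw.beep) ?: return
    player.setOnCompletionListener { it.release() } // free native resources when done
    player.start()
}
```

Releasing the player on completion matters: each MediaPlayer instance holds native resources, and leaking them is a classic source of audio bugs.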
This document provides instructions for using the iKITMovie animation software. It includes explanations of the interface, timeline, editing tools, and sound/music features. The main sections covered are:
1. An overview of the iKITMovie interface and what each button does.
2. Instructions for capturing images from a webcam, importing images, and adding them to the timeline as frames of animation.
3. Guidance on editing frames, zooming and navigating the timeline, and adding/removing sound effects and music.
4. Directions for playing back animations, exporting finished movies, and uploading videos to YouTube.
The document provides step-by-step explanations to help users learn the basics of the software.
The document discusses creating a musical experience for the game Resurface that changes based on how the player interacts with the world. It proposes designing a series of loops that can be modified or played through code to generate a unique soundtrack for each player. This would involve multi-layering music and connecting it to different in-game components, states, and actions so it can vary dynamically during gameplay. One approach described is composing a base loop and dividing instruments into groups that can then be duplicated, altered and substituted in as needed for smooth transitions.
The document discusses editing audio in a GameMaker demo game. It has simple mechanics where the goal is to destroy blocks by bouncing a ball. The audio can be edited by double-clicking sounds to open a menu, then browsing files to replace the current sound with a new one recorded or edited by the user. For example, the breaking-block sound was replaced with a metallic sound. Repeating this allows all sounds to be swapped for custom audio.
The document provides information about various games, apps, animations, artworks and design projects. It includes descriptions of elements like environments, characters, UI/UX, logos, icons, animations and gameplay for several mobile and desktop games. It also outlines features for music apps and includes links to videos and app store listings. Additionally, it showcases logos, digital artworks, oil paintings, illustrations and website designs created by the artist.
This document appears to be a catalog listing various baby and toddler toys, including descriptions and product codes. It includes items such as musical mobiles, puppets, teethers, rattles, bath toys, and educational toys that promote motor skills or interactive play. Specifications like dimensions, materials, battery requirements, and quantity are provided for each item.
To program sounds into a game, select the audio file to replace in the GameMaker sound tab, open the sound's properties to browse for and select the new audio file, then wire the sound into the game's logic so it plays alongside all the other sounds when the game is exported.
Practicing Agile Accessibility in Large Organizations (DevOps.com)
In a lot of ways, accessibility is talked about and practiced as if it were a hydra: every time you think you've made an improvement and cut off one head of the accessibility dragon, two heads grow back. In particular, there are a lot of myths out there claiming that you can't do accessibility in an agile development environment.
A lot of that stems from accessibility being thought of as something you do manually, and often only as user acceptance testing or usability testing. While that's an important part of accessibility, there is much more you can do to integrate accessibility into your agile processes to produce accessible products. In this live webinar, we will cover how to practice agile accessibility in large organizations.
The document describes Spencer Fox's process of creating animation for a video game. Some key points:
- Spencer created the background, sprites, enemies, collectible items, and HUD elements for the first level.
- Menu screens like the pause menu, abilities list, loading screen, main menu, and level select were also designed.
- Each section was animated in Photoshop then compiled into a video with Premiere Pro, adding sound effects and music composed in Beepbox.
- The full project was exported and uploaded to YouTube to share the final animated video game experience.
This talk is about some of the best practices in media playback, plus an introduction to ExoPlayer (most of the content is taken from Ian Lake's Google I/O '16 talk).
This document summarizes Android audio APIs and OpenSL ES, an open sound library for embedded systems like Android. It discusses APIs like MediaPlayer, SoundPool, AudioTrack/AudioRecord and their limitations. OpenSL ES provides low-level audio control and is device independent but Android's implementation supports only a subset of OpenSL features. It provides code examples for creating an OpenSL engine and implementing audio playback and recording in a loopback sample application using OpenSL objects like AudioPlayer and AudioRecorder across two threads.
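To make the buffer-oriented model of AudioTrack and OpenSL ES concrete, here is a framework-free Kotlin sketch that synthesizes one second of 16-bit PCM, the kind of raw buffer those low-level APIs consume. The sample rate and tone frequency are arbitrary choices, not values from the document.

```kotlin
import kotlin.math.PI
import kotlin.math.sin

// Generate one second of a 440 Hz sine tone as 16-bit mono PCM.
// This raw sample buffer is what AudioTrack.write() or an OpenSL ES
// buffer queue expects, in contrast to MediaPlayer's file-based input.
fun sineTone(sampleRate: Int = 44100, freqHz: Double = 440.0): ShortArray {
    val samples = ShortArray(sampleRate) // 1 second of mono audio
    for (i in samples.indices) {
        val t = i.toDouble() / sampleRate
        // Scale by 0.8 to leave headroom and avoid clipping at full scale.
        samples[i] = (sin(2 * PI * freqHz * t) * Short.MAX_VALUE * 0.8).toInt().toShort()
    }
    return samples
}

fun main() {
    val buf = sineTone()
    println(buf.size) // 44100 samples = 1 second at 44.1 kHz
}
```

On Android you would hand this array to an AudioTrack configured for 44.1 kHz mono PCM-16, or enqueue it on an OpenSL ES buffer queue from native code.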
5th DGMIT R&D Conference: Sound Module with OpenSL ES (dgmit2009)
This document describes how to create a sound module application for Android using OpenSL ES. It includes sections on OpenSL ES, sound player native methods for creating an audio engine and players, the Android activity for sound playback control, and the application layout. The native methods cover creating an engine, buffer queue audio player, asset audio player, and setting the playing state. The activity loads audio on creation and uses a spinner and buttons to select and play different audio files.
Implementing Voice Control with the Android Media Session API on Amazon Fire ... (Amazon Appstore Developers)
The powerful combination of voice and Amazon Fire TV allows your customers to use speech to interact with their living room environment and enjoy a new level of convenience. In this workshop, we’ll see how you can use the Android Media Session API to enable voice control on media streaming apps on Fire TV.
In this workshop you will learn:
· How to integrate Android MediaSession API in your app
· How your customers can play, pause, skip forward, or rewind content with their voice on Amazon Fire TV
· How to quickly build a voice-enabled, high quality media streaming app using the Amazon Fire App Builder (FAB)
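The MediaSession integration described above boils down to registering a callback on the session; voice commands arrive through the same path as hardware media buttons. A hedged Kotlin sketch follows — the holder class name is made up, and the callback bodies are left to the real player.

```kotlin
import android.support.v4.media.session.MediaSessionCompat

// On Fire TV, voice commands ("play", "pause", "skip", "rewind") are
// delivered as MediaSession callbacks once the session is active.
class PlaybackSessionHolder(session: MediaSessionCompat) {
    init {
        session.setCallback(object : MediaSessionCompat.Callback() {
            override fun onPlay() { /* resume the underlying player */ }
            override fun onPause() { /* pause the underlying player */ }
            override fun onSkipToNext() { /* advance the playback queue */ }
            override fun onRewind() { /* seek backwards */ }
        })
        session.isActive = true // required for the session to receive commands
    }
}
```

The point of the API is that one integration serves voice, remote controls, and notification actions alike.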
ExoPlayer is an alternative to Android's default MediaPlayer API. It supports features that MediaPlayer does not, such as DASH and SmoothStreaming adaptive playback. Unlike MediaPlayer, ExoPlayer is customizable and, because it ships as a library inside your app, can be updated through regular Play Store app updates. To implement ExoPlayer: add the dependencies, create a player instance, bind it to a view, and prepare media sources. ExoPlayer handles audio, video, subtitles, and DRM-protected content on Android 4.4 and higher.
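Those implementation steps might be sketched in Kotlin roughly as follows, using the 2018-era ExoPlayer 2.x API. The user-agent string and the idea of passing a raw URL are illustrative assumptions, not taken from the document.

```kotlin
import android.content.Context
import android.net.Uri
import com.google.android.exoplayer2.ExoPlayerFactory
import com.google.android.exoplayer2.SimpleExoPlayer
import com.google.android.exoplayer2.source.ExtractorMediaSource
import com.google.android.exoplayer2.trackselection.DefaultTrackSelector
import com.google.android.exoplayer2.ui.PlayerView
import com.google.android.exoplayer2.upstream.DefaultDataSourceFactory

// ExoPlayer 2.x setup: create a player, bind it to a view, prepare a source.
fun startPlayback(context: Context, view: PlayerView, url: String): SimpleExoPlayer {
    val player = ExoPlayerFactory.newSimpleInstance(context, DefaultTrackSelector())
    view.player = player // bind the player to its on-screen view
    val dataSourceFactory = DefaultDataSourceFactory(context, "demo-user-agent")
    // ExtractorMediaSource handles progressive media; DASH/SmoothStreaming
    // use their own MediaSource implementations.
    val source = ExtractorMediaSource.Factory(dataSourceFactory)
        .createMediaSource(Uri.parse(url))
    player.prepare(source)
    player.playWhenReady = true
    return player
}
```

Remember to call `player.release()` when playback is no longer needed, mirroring the MediaPlayer lifecycle.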
This document summarizes key mobile features available in the Flash runtime, including accelerometer, GPS, camera, video playback, audio control, and native extensions. It provides code examples for accessing the accelerometer, GPS, camera, and playing video. Native extensions allow adding third-party native code APIs to AIR apps to enhance performance and expand functions. They involve an Android project with native code and a Flex library defining the ActionScript API.
The document presents the objectives, functions, and design of a media player application. The objectives are to design a user-friendly, platform-independent media player that can play audio and video files. The major functions will include a graphical interface, playback controls, file browsing, and playlist functionality. The application will be designed using Java Media Framework and will have a modular structure divided into requirements analysis, design, coding, integration, and testing phases according to a work breakdown structure and Gantt chart project schedule.
The document discusses audio and video support and playback in the Android platform. It covers built-in encoding/decoding, playing media from resources, files and streams. It also covers playing JET interactive content and capturing audio using the MediaRecorder class. Supported audio formats include AAC, AMR, MP3, MIDI, Ogg Vorbis and PCM. Supported video formats include H.263, H.264 and MPEG-4.
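As a companion to the MediaRecorder mention above, a minimal capture sketch in Kotlin might look like this. The output path is a placeholder, and runtime handling of the RECORD_AUDIO permission is assumed to happen elsewhere.

```kotlin
import android.media.MediaRecorder

// Minimal MediaRecorder setup. The call order matters:
// source -> output format -> encoder -> output file, then prepare() and start().
// Requires the RECORD_AUDIO permission.
fun startRecording(outputPath: String): MediaRecorder =
    MediaRecorder().apply {
        setAudioSource(MediaRecorder.AudioSource.MIC)
        setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
        setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
        setOutputFile(outputPath)
        prepare()
        start()
    }
```

Stopping is symmetric: call `stop()` then `release()` on the returned recorder.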
Get On The Audiobus (CocoaConf Atlanta, November 2013), by Chris Adamson
Audiobus is an iOS app that allows other apps to work together as an audio-processing toolchain: play your MIDI keyboard into one app, run it through filters in other apps, and mix it in a third. All in real-time, foreground or background. That such a thing is possible on the locked down iOS platform is remarkable enough, but what's even more remarkable is that hundreds of audio apps have added Audiobus support in the few months since its debut, including Apple's own GarageBand. In this session, we'll take a look at the Audiobus SDK and see how to create inputs, outputs, and filters that can be managed by the Audiobus app to process audio in collaboration with other apps on the device.
Presentation slides from Google I/O Extended Seoul 2016.
ExoPlayer provides many sophisticated features, such as Dynamic Adaptive Streaming over HTTP (DASH), SmoothStreaming, and Common Encryption.
Get On The Audiobus (CocoaConf Boston, October 2013), by Chris Adamson
Audiobus is an iOS app that allows other apps to work together as an audio-processing toolchain: play your MIDI keyboard into one app, run it through filters in other apps, and mix it in a third. All in real-time, foreground or background. That such a thing is possible on the locked down iOS platform is remarkable enough, but what's even more remarkable is that hundreds of audio apps have added Audiobus support in the few months since its debut, including Apple's own GarageBand. In this session, we'll take a look at the Audiobus SDK and see how to create inputs, outputs, and filters that can be managed by the Audiobus app to process audio in collaboration with other apps on the device.
The document discusses sound and media APIs for Java ME, including the MIDP 2.0 Sound API and the more full-featured Mobile Media API (MMAPI). It describes the key components and classes of the MIDP Sound API, including the Manager class for obtaining Players, the Player interface for controlling media playback, and Control classes like VolumeControl. The MMAPI introduces additional functionality around custom protocols, synchronization of multiple Players, recording, and metadata. The goal was to define scalable sound APIs that can support a wide range of devices and applications.
Droidcon 2011: Gingerbread and Honeycomb, Markus Junginger, Greenrobot (Droidcon Berlin)
Gingerbread and Honeycomb
Markus Junginger, greenrobot
Google is developing Android rapidly: since the release of the Android 1.0 SDK two and a half years ago, Honeycomb is the 9th (!) release of the SDK. Having caught up with its competition in previous releases, Android is beginning to innovate with new APIs like Near Field Communication (NFC). This session keeps developers up to date with the new APIs introduced in Android 2.3 Gingerbread and Android 3.0 Honeycomb. Developers will learn how to use state-of-the-art features while maintaining compatibility with devices running older versions of the OS.
Besides NFC, performance is probably the most important advancement in Gingerbread: Android 2.3 gained a new concurrent garbage collector, an improved JIT compiler, and many new NDK features for high-performance native apps. Also, the SIP API may trigger a new breed of IP telephony apps.
Honeycomb is perceived as the first “tablet version” of Android. One of its most important features is Fragments, which become the new building blocks for apps that target both smartphone and tablet screens. Nevertheless, tablets are just one aspect of Android 3.0. For example, developers can now speed up the UI dramatically by activating hardware-accelerated rendering. The GPU is also the central part of the new animation framework and of the Renderscript engine, allowing 3D content and high-performance shaders. Together with multi-core CPU support, Honeycomb sets the stage for next-generation apps that exploit its desktop-like processing power.
The new APIs in 2.3 and 3.0 are a plentiful resource for developers to make their Android apps unique. This is the session you need to get started!
Android’s robust APIs can be used to add video, audio, and photo capabilities to apps for playing and recording media. The Android multimedia framework includes support for playing a variety of common media types, so that you can easily integrate audio, video, and images into your applications.
The Study For A Sound Engineering And Recording Class (Reggie621)
The document summarizes the author's research into proposing a new sound engineering and recording class at their school. It identifies different types of recording equipment and software programs, and evaluates which would be the best fit according to criteria like cost and compatibility with the school's computers. The author recommends adopting Cakewalk SONAR 7 as the digital audio workstation software, along with microphones, putting the total estimated cost between $5,000 and $6,000.
User Manual for Qditor V3, an all-in-one video editor which can help you easily make cool videos with the most powerful editing functions and hundreds of built-in effects.
The document describes the steps to design a 10-band software audio equalizer using MATLAB. It involves creating a GUI with 10 sliders, load, reset, play and stop buttons. Callback functions are added to each component to get the slider values and filter the audio. An equalizer plot is also added to visualize the filter response. Finally, 10 filters are designed, one for each band, and the equalizer is tested.
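The slider-to-filter idea is language-agnostic. As an illustration, here is a framework-free Kotlin sketch of the dB-to-linear-gain step behind each slider; the band center frequencies mirror a typical 10-band layout and are an assumption, not taken from the document.

```kotlin
import kotlin.math.pow

// Typical 10-band EQ center frequencies (Hz), one per slider.
val BANDS = listOf(31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000)

// Sliders usually edit gain in decibels, but filters multiply samples
// by a linear factor: gain_linear = 10^(gain_dB / 20).
fun dbToLinear(db: Double): Double = 10.0.pow(db / 20.0)

fun main() {
    println(dbToLinear(0.0)) // 1.0: slider centered, band unchanged
    println(dbToLinear(6.0)) // ~2.0: +6 dB roughly doubles amplitude
}
```

In the MATLAB design described above, each slider's callback would compute this factor and scale its band-pass filter's output accordingly.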
Audio post-production involves several key steps to create the final soundtrack for a visual media project. It typically includes dialogue editing, automated dialogue replacement (ADR) if needed, sound effects editing and design, Foley recording, music composition and editing, mixing, and mastering. During production, the location audio is recorded by the production sound mixer and team. After filming, the audio is transferred to editing formats and synced with the video dailies to allow the editing process to begin while additional shooting may still be underway. Once filming is complete, the editing phase continues to assemble the final cut.
Similar to The sounds of Android (Android Makers 2018)
This presentation was uploaded with the author’s consent.
This presentation by Juraj Čorba, Chair of OECD Working Party on Artificial Intelligence Governance (AIGO), was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
Carrer goals.pptx and their importance in real lifeartemacademy2
Career goals serve as a roadmap for individuals, guiding them toward achieving long-term professional aspirations and personal fulfillment. Establishing clear career goals enables professionals to focus their efforts on developing specific skills, gaining relevant experience, and making strategic decisions that align with their desired career trajectory. By setting both short-term and long-term objectives, individuals can systematically track their progress, make necessary adjustments, and stay motivated. Short-term goals often include acquiring new qualifications, mastering particular competencies, or securing a specific role, while long-term goals might encompass reaching executive positions, becoming industry experts, or launching entrepreneurial ventures.
Moreover, having well-defined career goals fosters a sense of purpose and direction, enhancing job satisfaction and overall productivity. It encourages continuous learning and adaptation, as professionals remain attuned to industry trends and evolving job market demands. Career goals also facilitate better time management and resource allocation, as individuals prioritize tasks and opportunities that advance their professional growth. In addition, articulating career goals can aid in networking and mentorship, as it allows individuals to communicate their aspirations clearly to potential mentors, colleagues, and employers, thereby opening doors to valuable guidance and support. Ultimately, career goals are integral to personal and professional development, driving individuals toward sustained success and fulfillment in their chosen fields.
Suzanne Lagerweij - Influence Without Power - Why Empathy is Your Best Friend...Suzanne Lagerweij
This is a workshop about communication and collaboration. We will experience how we can analyze the reasons for resistance to change (exercise 1) and practice how to improve our conversation style and be more in control and effective in the way we communicate (exercise 2).
This session will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
Abstract:
Let’s talk about powerful conversations! We all know how to lead a constructive conversation, right? Then why is it so difficult to have those conversations with people at work, especially those in powerful positions that show resistance to change?
Learning to control and direct conversations takes understanding and practice.
We can combine our innate empathy with our analytical skills to gain a deeper understanding of complex situations at work. Join this session to learn how to prepare for difficult conversations and how to improve our agile conversations in order to be more influential without power. We will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
In the session you will experience how preparing and reflecting on your conversation can help you be more influential at work. You will learn how to communicate more effectively with the people needed to achieve positive change. You will leave with a self-revised version of a difficult conversation and a practical model to use when you get back to work.
Come learn more on how to become a real influencer!
XP 2024 presentation: A New Look to Leadershipsamililja
Presentation slides from XP2024 conference, Bolzano IT. The slides describe a new view to leadership and combines it with anthro-complexity (aka cynefin).
The importance of sustainable and efficient computational practices in artificial intelligence (AI) and deep learning has become increasingly critical. This webinar focuses on the intersection of sustainability and AI, highlighting the significance of energy-efficient deep learning, innovative randomization techniques in neural networks, the potential of reservoir computing, and the cutting-edge realm of neuromorphic computing. This webinar aims to connect theoretical knowledge with practical applications and provide insights into how these innovative approaches can lead to more robust, efficient, and environmentally conscious AI systems.
Webinar Speaker: Prof. Claudio Gallicchio, Assistant Professor, University of Pisa
Claudio Gallicchio is an Assistant Professor at the Department of Computer Science of the University of Pisa, Italy. His research involves merging concepts from Deep Learning, Dynamical Systems, and Randomized Neural Systems, and he has co-authored over 100 scientific publications on the subject. He is the founder of the IEEE CIS Task Force on Reservoir Computing, and the co-founder and chair of the IEEE Task Force on Randomization-based Neural Networks and Learning Systems. He is an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS).
This presentation by OECD, OECD Secretariat, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
17. Step 4.1 - MediaPlayer
class MainActivity : AppCompatActivity() {
    private var player: MediaPlayer? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        ...
        findViewById<View>(R.id.button_play).setOnClickListener { play() }
    }

    override fun onStop() {
        super.onStop()
        player?.release()
    }

    private fun play() {
        player?.release()
        player = MediaPlayer.create(this, R.raw.best_fart_ever).apply {
            setOnCompletionListener { release() }
            start()
        }
    }
}
18. MediaPlayer
PROS
• Very simple to use
• Exists since API 1
• Handles resources, over-the-network streaming, even with DRM
19. MediaPlayer
CONS
• Not extensible
• Dependent on the OS for bug fixes and new features
22. Step 4.2 - ExoPlayer
class MainActivity : AppCompatActivity() {
    private lateinit var player: ExoPlayer

    override fun onCreate(savedInstanceState: Bundle?) {
        ...
        player = ExoPlayerFactory.newSimpleInstance(this, DefaultTrackSelector())
    }
    ...
    private fun play() {
        stop()
        with(player) {
            val source = ExtractorMediaSource.Factory(
                DefaultDataSourceFactory(this@MainActivity, "fartheaven")
            ).createMediaSource(Uri.parse("asset:///best_fart_ever.m4a"))
            prepare(source)
            playWhenReady = true
        }
    }
}
23. ExoPlayer
PROS
• Much more flexible and extensible
• Better suited for complex use cases
• External library, so it evolves independently of the OS
• Handles caching, adaptive playback, composition, etc.
24. ExoPlayer
CONS
• Steeper learning curve
• MinSdk 16 (or 19 for encryption support)
• Big library
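Since ExoPlayer ships as an external library, it has to be declared as a Gradle dependency. A sketch in Kotlin DSL; the artifact coordinates and version number below are illustrative and should be checked against the current release notes:

// app/build.gradle.kts — version is illustrative, check the ExoPlayer releases page
dependencies {
    implementation("com.google.android.exoplayer:exoplayer-core:2.9.6")
}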
48. 1. You need a listener
class AudioFocusListener(...) : AudioManager.OnAudioFocusChangeListener {
    override fun onAudioFocusChange(focusChange: Int) {
        when (focusChange) {
            AudioManager.AUDIOFOCUS_GAIN -> {
                // start playing or reset volume
            }
            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK -> {
                // reduce volume
            }
            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT -> {
                // pause without dropping focus
            }
            AudioManager.AUDIOFOCUS_LOSS -> {
                // focus was lost
            }
        }
    }
}
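To make the callbacks concrete, here is one possible way to fill them in, assuming the listener holds a reference to a MediaPlayer. The `player` constructor parameter and the 0.2f duck volume are illustrative choices, not part of the original slides:

```kotlin
class AudioFocusListener(
    private val player: MediaPlayer  // hypothetical: whatever is playing audio
) : AudioManager.OnAudioFocusChangeListener {
    override fun onAudioFocusChange(focusChange: Int) {
        when (focusChange) {
            AudioManager.AUDIOFOCUS_GAIN -> {
                player.setVolume(1f, 1f)               // restore full volume
                if (!player.isPlaying) player.start()  // resume if we had paused
            }
            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK ->
                player.setVolume(0.2f, 0.2f)           // keep playing, but quietly
            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT ->
                player.pause()                         // pause, keep listening for focus
            AudioManager.AUDIOFOCUS_LOSS -> {
                player.pause()                         // another app took focus for good
                // typically also abandon focus and release resources here
            }
        }
    }
}
```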
49. 1. You need a
listener
class AudioFocusListener(...) : AudioManager.OnAudioFocusChangeListener {
override fun onAudioFocusChange(focusChange: Int) {
when(focusChange) {
AudioManager.AUDIOFOCUS_GAIN -> {
// start playing or reset volume
}
AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK -> {
// reduce volume
}
AudioManager.AUDIOFOCUS_LOSS_TRANSIENT -> {
// pause without dropping focus
}
AudioManager.AUDIOFOCUS_LOSS -> {
// focus was lost
}
}
}
}
50. 1. You need a
listener
class AudioFocusListener(...) : AudioManager.OnAudioFocusChangeListener {
override fun onAudioFocusChange(focusChange: Int) {
when(focusChange) {
AudioManager.AUDIOFOCUS_GAIN -> {
// start playing or reset volume
}
AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK -> {
// reduce volume
}
AudioManager.AUDIOFOCUS_LOSS_TRANSIENT -> {
// pause without dropping focus
}
AudioManager.AUDIOFOCUS_LOSS -> {
// focus was lost
}
}
}
}
51. 1. You need a
listener
class AudioFocusListener(...) : AudioManager.OnAudioFocusChangeListener {
override fun onAudioFocusChange(focusChange: Int) {
when(focusChange) {
AudioManager.AUDIOFOCUS_GAIN -> {
// start playing or reset volume
}
AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK -> {
// reduce volume
}
AudioManager.AUDIOFOCUS_LOSS_TRANSIENT -> {
// pause without dropping focus
}
AudioManager.AUDIOFOCUS_LOSS -> {
// focus was lost
}
}
}
}
52. 1. You need a
listener
class AudioFocusListener(...) : AudioManager.OnAudioFocusChangeListener {
override fun onAudioFocusChange(focusChange: Int) {
when(focusChange) {
AudioManager.AUDIOFOCUS_GAIN -> {
// start playing or reset volume
}
AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK -> {
// reduce volume
}
AudioManager.AUDIOFOCUS_LOSS_TRANSIENT -> {
// pause without dropping focus
}
AudioManager.AUDIOFOCUS_LOSS -> {
// focus was lost
}
}
}
}
53. 2. Request the focus
private fun requestAudioFocus() {
    audioFocusListener = AudioFocusListener(…)
    val requestResult = audioManager.requestAudioFocus(
        audioFocusListener,
        AudioManager.STREAM_MUSIC,
        AudioManager.AUDIOFOCUS_GAIN
    )
    if (requestResult == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
        // start playing
    } else {
        // handle error
    }
}
54. 2. Request the focus … API >= 26
val attributes = AudioAttributesCompat.Builder()
    .setContentType(AudioAttributesCompat.CONTENT_TYPE_MUSIC)
    .setUsage(AudioAttributesCompat.USAGE_MEDIA)
    .build()

// keep the request around
request = AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN)
    .setAudioAttributes(attributes.unwrap() as AudioAttributes)
    .setOnAudioFocusChangeListener(audioFocusListener)
    .build()
audioManager.requestAudioFocus(request)
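The slides only show requesting focus, but a well-behaved app should also give it back when playback stops. A minimal sketch of the matching calls, assuming the same `audioManager`, `audioFocusListener` and `request` fields as above:

```kotlin
private fun abandonAudioFocus() {
    if (Build.VERSION.SDK_INT >= 26) {
        // API 26+: abandon via the same AudioFocusRequest we kept around
        audioManager.abandonAudioFocusRequest(request)
    } else {
        // Pre-26: the deprecated listener-based variant
        @Suppress("DEPRECATION")
        audioManager.abandonAudioFocus(audioFocusListener)
    }
}
```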
94. Good Reads
• Understanding MediaSession http://bit.ly/mediasession
• Building a Video Player app in Android http://bit.ly/vplayerandroid
• ADB Podcast Episode 85 Focus on Audio http://bit.ly/adbaudiofocus
• Styling Android’s MIDI posts http://bit.ly/midiandroid