This project report describes the development of an application on the DaVinci platform under the guidance of Prof. TK Dan. Akash Sahoo and Abhijit Tripathy, 7th semester B.Tech students, developed an application to take advantage of the DaVinci's integrated ARM and TMS320C64x+ DSP cores. They ported MontaVista Linux and DSP/BIOS to the DaVinci evaluation module board to enable the application and provide OS support across the hybrid processor system.
One of the biggest issues for a developer – whether they are an engineer at an OEM or working for a mobile AI application startup – is that their apps are at the mercy of pre-set power and performance settings as defined by OEMs or Silicon vendors. So how can a developer break through that barrier when it seems their hands are tied behind their backs? The Snapdragon Power Optimization SDK allows developers to control the CPU and GPU frequency much more finely from their own application logic. This provides developers with more control within the bounds of the power/thermal framework.
Redfish and python-redfish for Software Defined Infrastructure - Bruno Cornec
How the new Redfish protocol will help achieve the promises of a Software Defined Infrastructure, and which new projects, such as python-redfish and Alexandria, are needed to support it
Docker Container As A Service
X11 Linux apps on Mac in a container.
Java development with STS or Eclipse in a container.
Docker UCP and Swarm load balancing with Interlock.
It's a pivotal challenge to update the software in embedded systems due to many restrictions, such as unreliable network and power supply, limited bandwidth, and harsh environments. This slide deck aims to provide the background knowledge and the open source tools needed to achieve software updates in embedded systems.
Introduction to DragonBoard 410c Development Board and Starting Development of Your Embedded Linux-based IIoT Device
Watch the recording at: http://bit.ly/2AskXuW
Clear Containers is an Open Containers Initiative (OCI) “runtime” that launches an Intel VT-x secured hypervisor rather than a standard Linux container. An introduction to Clear Containers will be provided, followed by an overview of CNM networking plugins which have been created to enhance network connectivity using Clear Containers. More specifically, we will show demonstrations of using VPP with DPDK and SR-IOV based networks to connect Clear Containers. Time permitting, we will provide and walk through a hands-on example of using VPP with Clear Containers.
About the speaker: Manohar Castelino is a Principal Engineer for Intel’s Open Source Technology Center. Manohar has worked on networking, network management, network processors and virtualization for over 15 years. Manohar is currently an architect and developer with the ciao (clearlinux.org/ciao) and the clear containers (https://github.com/01org/cc-oci-runtime) projects focused on networking. Manohar has spoken at many Container Meetups and internal conferences.
Multi-OS Continuous Packaging with docker and Project-Builder.org - Bruno Cornec
Docker is now a mature technology used for contained execution of applications.
It can also be used successfully to support a Continuous Packaging approach.
We will explain and demonstrate how to combine it with project-builder.org to help upstream projects seamlessly distribute packages for their code at any step of their development life cycle.
We'll explain how to build a new container, set it up for this usage, and prepare the delivery of the project content, in order to finally build packages in it for the hosted distribution and publish them for immediate consumption as part of the package management system.
This continuous packaging approach supports multiple repository types, operating systems/Linux distributions, build environments, and repository managers.
Isn’t it Ironic that a Redfish is software defining you - Bruno Cornec
Ironic already helps you deploy your bare-metal servers as part of your OpenStack-based cloud infrastructure.
A new effort is ongoing between various actors to standardize server management in a software-defined way using a new RESTful API called the Redfish specification (WIP definition at http://www.redfishspecification.org).
We will explain our current work to create a Python library offering Redfish specification abstractions useful to Ironic (power management, information pickup, ...) and how we intend to adapt Ironic to add support for the Redfish specification in the future.
Redfish is an IPMI replacement standardized by the DMTF. It provides a RESTful API for server out-of-band management and a lightweight data model specification that is scalable, discoverable, and extensible (cf. http://www.dmtf.org/standards/redfish). This presentation will start by detailing its role and the features it provides, with examples. It will demonstrate the benefits it brings to system administrators by providing a standardized open interface for multiple servers, and also storage systems.
We will then cover various tools, such as the DMTF ones and the python-redfish library (cf. https://github.com/openstack/python-redfish), offering Redfish abstractions.
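To make the Redfish data model concrete, here is a small Python sketch of the kind of payload a Redfish service returns. The sample JSON below is invented, but its shape follows the specification's conventions (@odata.id links, Members collections); over HTTP it would come from a GET on <base>/redfish/v1/Systems.

```python
# Hypothetical sketch: walking a Redfish collection payload.
# The sample payload is invented; the field names follow DMTF Redfish conventions.

def list_member_paths(collection):
    """Return the @odata.id path of every member in a Redfish collection."""
    return [member["@odata.id"] for member in collection.get("Members", [])]

sample_systems = {
    "@odata.id": "/redfish/v1/Systems",
    "Name": "Computer System Collection",
    "Members@odata.count": 2,
    "Members": [
        {"@odata.id": "/redfish/v1/Systems/1"},
        {"@odata.id": "/redfish/v1/Systems/2"},
    ],
}

print(list_member_paths(sample_systems))
# prints ['/redfish/v1/Systems/1', '/redfish/v1/Systems/2']
```

On a live server the payload would typically be fetched with an HTTPS GET using basic or session authentication, then walked exactly as above.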
Learning from ZFS to Scale Storage on and under Containers - inside-BigData.com
Evan Powell presented this deck at the MSST 2017 Mass Storage Conference.
"What is so new about the container environment that a new class of storage software is emerging to address these use cases? And can container orchestration systems themselves be part of the solution? As is often the case in storage, metadata matters here. We are implementing in the open source OpenEBS.io some approaches that are in some regards inspired by ZFS to enable much more efficient scale out block storage for containers that itself is containerized. The goal is to enable storage to be treated in many regards as just another application while, of course, also providing storage services to stateful applications in the environment."
Watch the video: http://wp.me/p3RLHQ-gPs
Learn more: blog.openebs.io and http://storageconference.us
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Quieting noisy neighbor with Intel® Resource Director Technology - Michelle Holley
A typical cloud server hosts multiple VMs, each hosting an independent application. Operating a mixture of applications in the cloud requires proper resource management, which is critical to QoS. This session studies the impact of different neighbors on an application’s performance and shows how Intel® RDT can help detect and mitigate a noisy-neighbor situation.
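As an illustration of the mechanism involved (not taken from the talk): on Linux, Intel RDT is exposed through the resctrl filesystem, and a noisy neighbor can be confined by writing a cache-capacity bitmask into a group's schemata file and moving its PID into the group's tasks file. The sketch below exercises that file layout against a scratch directory; the group name, PID, and mask are invented, and a real system would use /sys/fs/resctrl (mounted with `mount -t resctrl resctrl /sys/fs/resctrl`) and require root.

```python
# Sketch of confining a task's L3 cache allocation via the resctrl layout.
# Runs against a temporary directory standing in for /sys/fs/resctrl.

import os
import tempfile

def create_cache_group(base, name, pid, cbm_hex):
    """Create a resctrl-style group, cap its L3 ways, and assign a task to it."""
    group = os.path.join(base, name)
    os.makedirs(group, exist_ok=True)
    # Capacity bitmask: which L3 ways the group may use (e.g. 0x0f = 4 ways).
    with open(os.path.join(group, "schemata"), "w") as f:
        f.write("L3:0=%s\n" % cbm_hex)
    # Move the noisy neighbor's PID into the restricted group.
    with open(os.path.join(group, "tasks"), "w") as f:
        f.write(str(pid))
    return group

base = tempfile.mkdtemp()          # scratch stand-in for /sys/fs/resctrl
g = create_cache_group(base, "noisy", 1234, "0f")
print(open(os.path.join(g, "schemata")).read().strip())
# prints L3:0=0f
```

With real resctrl, the kernel enforces the mask in hardware; reading the group's mon_data directories then shows the confined occupancy.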
About the authors: Sunil is a senior cloud performance engineer at Intel working on cloud performance and optimization for Oracle cloud. Prior to this he worked on service assurance and orchestration products for OpenStack cloud. Sunil has 10+ years of experience working on different software products for server management. He holds a Master's in Computer Science from IIT Chicago.
Khun Ban is a cloud performance engineering manager leading a team to optimize cloud performance and TCO. He has over twenty years of enterprise software development experience. His current focus is on providing customers with the best cloud experience. He received his B.S. degree in Computer Science and Engineering from the University of Washington in 1995.
Long-term Maintenance Model of Embedded Industrial Linux Distribution - SZ Lin
Introducing a robust, secure, and reliable platform for industrial environments is a key challenge; moreover, the platform needs to survive for a long time (more than 10 years). There are many good solutions aiming to meet these requirements, such as LTSI (Long Term Support Initiative) and CIP (Civil Infrastructure Platform). However, it still takes substantial maintenance and development cost to handle SoC/hardware board in-house patches and non-upstream drivers, and to keep the source code consistent across different SoCs and platforms afterwards.
In this presentation, SZ Lin will introduce how to operate a long-term maintenance model of an embedded industrial Linux distribution. In addition, he will also address the building, deploying, and testing architecture and workflow for producing a robust, secure, and reliable platform.
Vector Packet Technologies such as DPDK and FD.io/VPP revolutionized software packet processing, initially for discrete appliances and then for NFV use cases. Container-based VNF deployment and its supporting NFV infrastructure is now the new frontier in packet processing and has a number of strong advocates among both traditional Comms Service Providers and in the Cloud. This presentation will give an overview of how the DPDK and FD.io/VPP projects are rising to meet the challenges of the Container dataplane. The discussion will provide an overview of the challenges, recent new features, and what is coming soon in this exciting new area for the software dataplane, in both DPDK and FD.io/VPP!
About the speaker: Ray Kinsella has been working on Linux and various other open source technologies for about twenty years. He is recently active in open source communities such as VPP and DPDK but is a constant lurker in many others. He is interested in the software dataplane and optimization, virtualization, operating system design and implementation, communications and networking.
Bugs happen. Identifying and fixing them is part of the development process. This tutorial demonstrates one of the key tools in the embedded Linux developer’s toolbox: the GNU Debugger, GDB.
You will begin by using GDB to debug a program running on a target device. You will learn about debug symbols: how to build them into programs and libraries, and the places that GDB will go looking for them. Next, you will perform basic debugging tasks, including setting breakpoints, stepping through code, and examining and modifying variables. After that you will learn about GDB command files and how they can help you by automating certain tasks. You will receive a handy GDB cribsheet to help you with all of this. If time allows, we will discuss how to use GDB to analyse core dumps so that you can perform a post-mortem on a crashed program.
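As a taste of the command files mentioned above, here is a hypothetical example; the program name, target address, and symbol names are invented for illustration.

```
# debug.gdb - a sketch of a GDB command file
file myapp                        # load the program and its debug symbols
target remote 192.168.0.42:2345   # attach to gdbserver running on the device
break process_frame               # stop whenever process_frame is reached
commands                          # attach actions to the breakpoint just set
  print frame_count               # inspect a variable of interest
  continue                        # then resume automatically
end
continue                          # let the program run on the target
```

A file like this can be loaded with `gdb -x debug.gdb`, or with `source debug.gdb` from inside a GDB session.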
Cross-channel marketing that centers around your customer
Differentiate your brand through truly rich customer experiences by activating your customer data to create intelligent interactions, every time.
Chicago Docker Meetup Presentation - Mediafly
Bryan Murphy's presentation from the 2nd Chicago Docker meetup on March 12, 2014 at Mediafly HQ. In his presentation, Bryan explains how we use Docker right now at Mediafly in production.
IMAGE CAPTURE, PROCESSING AND TRANSFER VIA ETHERNET UNDER CONTROL OF MATLAB G... - Christopher Diamantopoulos
This implemented DSP system utilizes TCP socket communication. Upon message reception, it decides the appropriate process to execute based on the received message; the cases can be categorized as follows:
1) image capture
2) image transfer
3) image processing
4) sensor calibration
A user-friendly MATLAB GUI, named DIPeth, facilitates the system's control.
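The message-dispatch pattern described above can be sketched as follows. This is an illustrative Python transplant (the real system runs on the DSP, presumably in C), and the message strings and replies are invented:

```python
# Sketch of a TCP command dispatcher in the style described above:
# receive a message over a socket, map it to one of the four process cases.
# Command names and replies are invented for illustration.

import socket
import threading

HANDLERS = {
    b"CAPTURE":   lambda: b"image captured",     # case 1: image capture
    b"TRANSFER":  lambda: b"image sent",         # case 2: image transfer
    b"PROCESS":   lambda: b"image processed",    # case 3: image processing
    b"CALIBRATE": lambda: b"sensor calibrated",  # case 4: sensor calibration
}

def dispatch(message):
    """Pick the process to execute based on the received message."""
    handler = HANDLERS.get(message.strip().upper())
    return handler() if handler else b"unknown command"

# Wire the dispatcher to a TCP socket, as the GUI-to-DSP link would be.
srv = socket.create_server(("127.0.0.1", 0))  # port 0: let the OS pick one
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(dispatch(conn.recv(64)))

t = threading.Thread(target=serve_once)
t.start()
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"CAPTURE")
    print(client.recv(64).decode())
# prints image captured
t.join()
srv.close()
```

A controller GUI such as DIPeth would play the client role here, sending one command per operation and reading back the status reply.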
WinOps meetup April 2016: DevOps lessons from Microsoft \\Build\ - DevOpsGroup
Some DevOps lessons from the 2016 Microsoft Build conference that were presented at the London WinOps meetup in April 2016. Most of the material was taken from the Microsoft presentations available here - https://channel9.msdn.com/Events/Build/2016?wt.mc_id=build_hp
Organizations can pick between numerous free community-supported distributions of the Linux operating system. In the data center and on AWS, Azure, GKE, CloudFlare, DigitalOcean, and other public clouds, these free versions are available as part of the default configuration. Why, then, would you pay for Linux?
These slides, based on a webinar hosted by Red Hat and leading IT research firm EMA, provide insights into what has and has not worked related to the adoption of free versus subscription-based Linux distributions.
The Civil Infrastructure Platform (CIP) is creating a super long-term supported (SLTS) open source "base layer" for industrial grade software. We have been working on security fixes and some backported features since the moment we decided that Linux kernel v4.4 would be the first SLTS version. In this talk, we will describe the current development
status of the SLTS kernel and testing environment. First, we'll explain our kernel development policy. Then, we'll describe the functionality that has been backported. Second, we'll talk about testing before using our base-layer on real products. We have been developing a test framework to collect and share test results. To build it, we don't want to duplicate existing work such as KernelCI, Fuego and others. For that reason, we are trying to collaborate and contribute to such projects.
Pivotal Cloud Foundry 2.4: A First Look - VMware Tanzu
Join Dan Baskette and Jared Ruckle for a view into Pivotal Cloud Foundry (PCF) 2.4 capabilities with demos and expert Q&A. We’ll review the latest features for Pivotal’s flagship app platform, including the following:
- Native zero downtime push and native zero downtime restarts
- Dynamic egress policies
- Operations Manager updates
- Zero downtime stack updates to cflinuxfs3
- Zero downtime OS updates
- New pathways protected by TLS
- New scanning tools to assist with compliance
Plus much more!
Presenters: Dan Baskette, Director, Technical Marketing; Jared Ruckle, Principal Product Marketing Manager
1. A Project Report
on
“Application Development on DaVinci Platform”
Under the guidance of Prof. TK Dan
Department of Electronics and Communication, NIT Rourkela
Submitted by: Akash Sahoo (108EI010) & Abhijit Tripathy (108EC013), B.Tech 7th Sem
5. We shall discuss the processor in the next section, along with the OSes it can run.
Features:
• Core: ARM926EJ-S™ (CPU at 300 MHz); TMS320C64x+™ DSP core at 600 MHz
• Memory: ARM: 16K I-Cache, 8K D-Cache, 32K TCM RAM, 8K Boot ROM; DSP: 32K L1 I-Cache, 32K L1 D-Cache, 128K L2 Cache, 64K Boot ROM
• HD Coprocessors: real-time HD-to-HD transcoding up to 1080p; multi-format (MF) HD to MF HD or MF SD; up to 2× real time for HD-to-SD transcode; real-time HD-to-HD transcoding for PVR
• Video Encode and Decode: HD 720p H.264 BP encode
DM355 Architecture
6. One of the important interfacing components is the VPSS (Video Processing Sub System). It is comprised of two blocks: the Front End (VPFE) and the Back End (VPBE). The VPFE consists of the CCD Controller (CCDC), Statistics Engine (H3A), Previewer, and Resizer, whereas the VPBE consists of the On-Screen Display (OSD) and Video Encoder (VENC). When previewing an image, the image goes from the camera to the VPFE, then to DDR, the SCR (Switched Central Resource), and then back to the output. In the case of image processing, the image goes from DDR RAM to EDMA, then to cache, and then to the processor where it is processed; it then goes back to the VPBE for display. For more details on the DM355 architecture, the Texas Instruments online manual can be referenced.
7. OS for the Board
A problem comes up when powering such a hybrid DaVinci SoC (System on Chip), which includes both a GPP (General Purpose Processor) and a DSP (Digital Signal Processor). The OS that runs this must schedule tasks properly and have proper IPC (inter-processor communication). For the DSP, the task scheduler is a lightweight scheduler called DSP/BIOS. For the GPP, we consider the world of Linux.
DSP/BIOS:
Let's first go over to the world of DSP/BIOS, the real-time OS. The DSP/BIOS kernel is a scalable real-time multi-tasking kernel designed specifically for TI DSP platforms. With its associated networking, microprocessor-DSP communications, and driver modules, the DSP/BIOS kernel provides a solid foundation for even the most sophisticated DSP applications. It is standardized across TI DSP platforms to support rapid application migration, and it is also optimized to run on the DSP cores of DaVinci devices.
Features:
The DSP/BIOS kernel provides a rich set of C-callable deterministic kernel services that enable developers to create sophisticated applications without compromising real-time deadlines. It is highly scalable, with multithreading configurations requiring as few as 1K words.
The DSP/BIOS kernel is configurable to minimize memory footprint. Configuration can be done either statically, through graphical or scripting tools, or dynamically, using operating system calls. In addition to excluding unused modules, static configuration further reduces the target memory footprint by eliminating the code required to dynamically create and delete operating system objects such as threads and semaphores. The main features include multithreading, IPC mechanisms, multicore support, interrupt management, power management, and OS-aware analysis/debug.
TI does not charge for this OS. It may not be suitable for everyday use, but it may be best for real-time use due to three factors: scalability, speed, and low latency.
8. LINUX OS for the GPP:
Why Linux:
The advantages include:
• Linux is royalty-free.
• Linux already includes driver software for a huge number of devices and, because current drivers are well documented and include source code, developing new drivers is easy.
• The wealth of software tools included with Linux can substantially decrease development time.
• Linux's ability to run on generic hardware decreases the costs associated with purchasing development systems.
• Because Linux is being used extensively in universities, the pool of people who understand it, including its internals, is growing every day.
9. MontaVista – the Linux flavour: MontaVista Software, Inc. is the leader in
embedded Linux commercialization. For over 10 years, MontaVista has been
helping embedded developers get the most out of open source by adding
commercial quality, integration, hardware enablement, expert support, and the
resources of the MontaVista development community. Because MontaVista
customers enjoy faster time to market, more competitive device functionality,
and lower total cost, more devices have been deployed with MontaVista than
with any other Linux.
For more info visit : http://www.mvista.com/product_detail_mvl6.php
PORTING MONTAVISTA AND DSP/BIOS TO OUR DAVINCI BOARD:
The main process/flow chart of the complete process is given below.
For further info on the process or specific commands, please look into the manual
spruf73a.pdf given with the EVM kit.
INSTALLING THE TARGET LINUX SOFTWARE:
1. Install the following files to the /opt/mvpro directory
• ./mvl_4_0_1_demo_sys_setuplinux.bin
• ./mvl_4_0_1_demo_target_setuplinux.bin
• ./mvl_4_0_1_demo_lsp_setuplinux_#_#_#_#.bin
2. Untar the following tar files installed from the /opt/mvpro directory
• host $ tar zxf mvltools4.0.1-no-target.tar.gz
• host $ tar zxf mvl4.0.1-target_path.tar.gz
• host $ tar zxf DaVinciLSP-#_#_#_#.tar.gz
3. Install the DVSDK tools to the /home/user/dvsdk directory
• ./dvsdk_setuplinux_#_#_#_#.bin
• ./xdc_setuplinux_#_#_#_#.bin
SETTING UP THE NFS SERVER FILE SYSTEM:
• Type the following commands to create the target NFS file system folder
host $ cd /home/useracct
host $ mkdir -p workdir/filesys
host $ cd workdir/filesys
• copy the binary files to the NFS folder
$ cp –a /opt/mv_pro_4.0.1/montavista/pro/devkit/arm/v5t_le/target/* .
$ chown -R useracct opt
• Edit the /etc/exports file on the host Linux workstation. Add the
10. following line for exporting the filesys area, substituting your user
name for useracct.
/home/useracct/workdir/filesys *(rw,no_root_squash,no_all_squash,sync)
• Restart the services:
host $ /usr/sbin/exportfs -av
host $ /sbin/service nfs restart
Verify that the server firewall is turned off:
host $ /etc/init.d/iptables status
If the firewall is running, disable it:
host $ /etc/init.d/iptables stop
• Change the PATH to the following to add the new OS tools to the executable search path:
PATH=/opt/mv_pro_4.0.1/montavista/pro/devkit/arm/v5t_le/bin:
/opt/mv_pro_4.0.1/montavista/pro/bin:
/opt/mv_pro_4.0.1/montavista/common/bin:$PATH
BUILDING THE LINUX KERNEL
To rebuild the Linux Kernel, follow these steps:
1) Log in to your user account
2) Set the PLATFORM variable in the Rules.make file as described in
3) Use commands like the following to make a local working copy of the
MontaVista Linux Support Package (LSP) in your home directory.
This copy contains the embedded Linux 2.6.10 kernel plus the
DaVinci drivers. If you installed in a location other than
/opt/mv_pro_4.0.1, use your location in the cp command.
host $ cd /home/useracct
host $ mkdir -p workdir/lsp
host $ cd workdir/lsp
11. host $ cp -R /opt/mv_pro_4.0.1/montavista/pro/devkit/lsp/ti-davinci .
4) Use the following commands to configure the kernel using the
DaVinci defaults. Note that CROSS_COMPILE specifies a prefix for
the executables that is used during compilation:
host $ cd ti-davinci/linux-2.6.10_mvl401
host $ make ARCH=arm CROSS_COMPILE=arm_v5t_le- davinci_dm355_evm_defconfig
5) To modify the kernel options, you will need to use a configuration
command such as make menuconfig or make xconfig. To enable
the MontaVista default kernel options, use the following command:
host $ make ARCH=arm CROSS_COMPILE=arm_v5t_le- checksetconfig
6) Compile the kernel using the following command:
host $ make ARCH=arm CROSS_COMPILE=arm_v5t_le- uImage
7) If the kernel is configured with any loadable modules (that is,
selecting M for a module in menuconfig), use the following
commands to rebuild and install these modules:
host $ make ARCH=arm CROSS_COMPILE=arm_v5t_le- modules
host $ make ARCH=arm CROSS_COMPILE=arm_v5t_le-
INSTALL_MOD_PATH=/home/useracct/workdir/filesys modules_install
REBUILDING THE DVEVM SOFTWARE FOR THE TARGET:
To place demo files in the /opt/dvevm directory, you need to rebuild the
DVEVM software. To do this, follow these steps:
1) Change directory to dvsdk_#_#.
2) Edit the dvsdk_#_#/Rules.make file.
■ Set PLATFORM to match your EVM board as follows:
PLATFORM=dm355
■ Set DVSDK_INSTALL_DIR to the top-level DVEVM installation
directory as follows:
DVSDK_INSTALL_DIR=/home/useracct/dvsdk_#_#
■ Make sure EXEC_DIR points to the opt directory on the NFS
exported file system as follows:
EXEC_DIR=/home/useracct/workdir/filesys/opt/dvsdk/dm355
■ Make sure MVTOOL_DIR points to the MontaVista Linux tools
directory as follows:
MVTOOL_DIR=/opt/mv_pro_4.0.1/montavista/pro/devkit/arm/v5t_le
■ Make sure LINUXKERNEL_INSTALL_DIR is defined as follows:
LINUXKERNEL_INSTALL_DIR=/home/useracct/workdir/lsp/ti-davinci/linux-2.6.10_mvl401
3) While in the same directory that contains Rules.make, use the
following commands to build the DVSDK demo applications and put
the resulting binaries on the target file system specified by
EXEC_DIR.
host $ make clean
host $ make
host $ make install
BOOTING THE NEW LINUX KERNEL:
1) Power on the EVM board, and abort the automatic boot sequence by
pressing a key in the console window
2) Set the following environment variables. (This assumes you are
starting from a default, clean U-Boot environment. See Section 3.1,
Default Boot Configuration for information on the U-Boot default
environment.)
EVM # setenv bootcmd 'dhcp;bootm'
EVM # setenv serverip <nfs server ip address>
EVM # setenv bootfile uImage
EVM # setenv bootargs mem=116M console=ttyS0,115200n8
root=/dev/mtdblock3 rw rootfstype=yaffs2 ip=dhcp
video=davincifb:vid0=720x576x16,2500K:vid1=720x576x16,
2500K:osd0=720x576x16,2025K
davinci_enc_mngr.ch0_output=COMPOSITE
davinci_enc_mngr.ch0_mode=$(videostd)
EVM # saveenv
Note that the setenv bootargs command should be typed on a single line.
3) Boot the board:
EVM # bootm
IMAGE PROCESSING : SCALING
In computer graphics, image scaling is the process of resizing a digital image.
Scaling is a non-trivial process that involves a trade-off between efficiency,
smoothness, and sharpness. As the size of an image is increased, the pixels that
comprise the image become increasingly visible, making the image appear soft.
Conversely, reducing an image tends to enhance its smoothness and apparent
sharpness. Apart from fitting a smaller display area, image size is most commonly
decreased (subsampled or downsampled) in order to produce thumbnails. Enlarging
an image (upsampling or interpolating) is generally less common. The main reason
is that zooming an image cannot reveal any more information than already exists
in it, so image quality inevitably suffers. However, there are several methods of
increasing the number of pixels that an image contains, which evens out the
appearance of the original pixels.
An image's size can be changed in several ways. Consider doubling the size of an
image. The easiest way of doubling its size is nearest-neighbour interpolation,
which replaces every pixel with four pixels of the same color.
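This pixel-replication scheme can be sketched in C for a single-channel (grayscale) image. The function name and buffer layout below are illustrative only, not taken from the DVEVM sources:

```c
#include <stdio.h>

/* Double a grayscale image with nearest-neighbour interpolation:
 * every source pixel becomes a 2x2 block in the destination.
 * src holds w*h bytes; dst must hold (2*w)*(2*h) bytes. */
static void scale2x_nearest(const unsigned char *src, unsigned char *dst,
                            int w, int h)
{
    int dw = 2 * w; /* destination row width */
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            unsigned char p = src[y * w + x];
            /* write the same value into a 2x2 block */
            dst[(2 * y)     * dw + 2 * x]     = p;
            dst[(2 * y)     * dw + 2 * x + 1] = p;
            dst[(2 * y + 1) * dw + 2 * x]     = p;
            dst[(2 * y + 1) * dw + 2 * x + 1] = p;
        }
    }
}
```

The zoom program in the appendix applies the same idea to a YUV422 stream, where each pixel is a luma byte paired with a shared chroma byte.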
The resulting image is larger than the original and preserves all the original detail, but
has undesirable jagginess. The diagonal lines of a letter such as W, for example, show
the characteristic stairway shape. Other scaling methods are better at preserving smooth
contours in the image; bilinear interpolation is one example.
Linear (or bilinear, in two dimensions) interpolation is typically better than the nearest-
neighbor method for changing the size of an image, but causes some undesirable softening
of details and can still be somewhat jagged. Better scaling methods include bicubic
interpolation.
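As a sketch of how bilinear interpolation computes each output pixel as a distance-weighted average of the four nearest source pixels, consider the following grayscale routine. It is illustrative only (row-major 8-bit buffers assumed) and is not part of the DVEVM software:

```c
#include <stdio.h>

/* Scale a grayscale image to an arbitrary new size with bilinear
 * interpolation: each destination pixel is the weighted average of
 * the 2x2 source neighbourhood it maps onto. */
static void scale_bilinear(const unsigned char *src, int sw, int sh,
                           unsigned char *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++) {
        /* map the destination row back into source coordinates */
        float fy = (float)y * (sh - 1) / (dh > 1 ? dh - 1 : 1);
        int y0 = (int)fy;
        int y1 = (y0 + 1 < sh) ? y0 + 1 : y0;
        float wy = fy - y0;
        for (int x = 0; x < dw; x++) {
            float fx = (float)x * (sw - 1) / (dw > 1 ? dw - 1 : 1);
            int x0 = (int)fx;
            int x1 = (x0 + 1 < sw) ? x0 + 1 : x0;
            float wx = fx - x0;
            /* blend horizontally along the top and bottom rows,
             * then blend the two results vertically */
            float top = src[y0 * sw + x0] * (1 - wx) + src[y0 * sw + x1] * wx;
            float bot = src[y1 * sw + x0] * (1 - wx) + src[y1 * sw + x1] * wx;
            dst[y * dw + x] = (unsigned char)(top * (1 - wy) + bot * wy + 0.5f);
        }
    }
}
```

The weighting is what softens detail: an output pixel that falls between source pixels gets an intermediate value rather than a copy of its nearest neighbour.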
There are also advanced magnifying methods developed for computer graphics called
supersampling. The best results are achieved when magnifying images with low
resolution and few colors.
The Scale Image command enlarges or reduces the physical size of the image by
changing the number of pixels it contains. It changes the size of the contents of the
image and resizes the canvas accordingly.
It operates on the entire image. If your image has layers of different sizes,
making the image smaller could shrink some of them down to nothing, since a layer
cannot be less than one pixel wide or high. If this happens, you will be warned before
the operation is performed.
Quality
To change the image size, either some pixels have to be removed or new pixels
must be added. The process you use determines the quality of the result. The
Interpolation drop-down list provides a selection of available methods of
interpolating the color of pixels in a scaled image:
• None: No interpolation is used. Pixels are simply enlarged or removed, as they
are when zooming. This method is low quality, but very fast.
• Linear: This method is relatively fast, but still provides fairly good results.
• Cubic: The method that produces the best results, but also the slowest method.
• Sinc (Lanczos 3): New with GIMP 2.4, this method gives less blur in important
resizings.
[Flowchart: zoom program logic — take the input and output files as command-line
arguments; calculate the image size from the given width and height; create the
output file with the given name; allocate memory for the output file; scan the
input file left to right, top to bottom, copying each pixel twice until the row
ends, then go to the next row until the end of the input file; scan the image in
the allocated memory and duplicate each row; store the processed image from the
allocated memory to the output file. If any step fails, print an error and exit.]
APPENDIX : Program to zoom
#include <stdio.h>
#include <stdlib.h>
#include "../../include/image.h"
#include "../../include/tistdtypes.h"

void convert_to_zoom(FILE *in, FILE *out); /* defined below */

int main(int argc, char *argv[])
{
    char *in_file = NULL;    /* name of source input YUV file, which is to be scaled up */
    char *out_file = NULL;   /* name of output YUV file obtained by image processing */
    FILE *input_img = NULL;  /* pointer to source file */
    FILE *output_img = NULL; /* pointer to destination file which will contain the
                                converted, zoomed image */

    /* Take the input and output YUV file names, and the height and width of the
       input image, as command line arguments. (width, height, WIDTH, HEIGHT and
       the usage string are assumed to be declared in image.h.) */
    if (argc == 1)
    {
        in_file = "../../data/frame.yuv";
        out_file = "../../data/frame_zoom.yuv";
        width = WIDTH;
        height = HEIGHT;
    }
    else if (argc == 3)
    {
        in_file = argv[1];
        out_file = argv[2];
        width = WIDTH;
        height = HEIGHT;
    }
    else if (argc == 5)
    {
        in_file = argv[1];
        out_file = argv[2];
        width = atoi(argv[3]);
        height = atoi(argv[4]);
    }
    else
    {
        fprintf(stderr, usage, argv[0]);
        exit(1);
    }

    /* open file streams for input and output */
    if ((input_img = fopen(in_file, "rb")) == NULL)
    {
        fprintf(stderr, "ERROR: can't read file %s\n", in_file);
        goto end;
    }
    if ((output_img = fopen(out_file, "wb")) == NULL)
    {
        fprintf(stderr, "ERROR: can't write file %s\n", out_file);
        goto end;
    }

    /* Function call to convert the input YUV422 format image to a zoomed image */
    convert_to_zoom(input_img, output_img);

    /* close the file streams */
end:
    if (input_img)
        fclose(input_img);
    if (output_img)
        fclose(output_img);
    return 0;
}
void convert_to_zoom(FILE *in, FILE *out)
{
    Uint16 *p_infile = NULL;   /* pointer to memory holding the input file byte stream */
    Uint16 *p_outfile = NULL;  /* pointer to memory holding the processed output byte stream */
    Uint16 *in_pixel_uy = NULL;
    Uint16 *in_pixel_vy = NULL;
    Uint8 *out_pixel_chroma = NULL;
    Uint16 *p_outfile_odd_row = NULL;
    Uint16 *p_outfile_even_row = NULL;
    Uint8 *out_pixel_u = NULL;
    Uint8 *out_pixel_v = NULL;
    Uint16 *out_pixel = NULL;
    int numRead;
    int img_size, i = 0, j;    /* total size of the input image in bytes */

    /* Calculate input image size (YUV422: 2 bytes per pixel) */
    img_size = (width * height) * 2;

    /* Allocate buffers to hold all pixel data from the input file and the
       4x larger output image */
    p_infile = (Uint16 *)malloc(img_size);
    p_outfile = (Uint16 *)malloc(img_size * 4);

    /* read the YUV422 image file (raw format - no header) */
    if ((numRead = fread(p_infile, 1, img_size, in)) != img_size)
    {
        printf("ERROR: could not read a complete image from input file - %d vs. %d\n",
               numRead, img_size);
    }
    else
    {
        /* position pointers for luminance and chroma pixels in the input file */
        in_pixel_uy = p_infile;
        in_pixel_vy = in_pixel_uy + 1;
        out_pixel = p_outfile;
        for (i = 0; i < height; i++)
        {
            for (j = 0; j < width / 2; j++)
            {
                out_pixel_chroma = (Uint8 *)out_pixel;
                out_pixel_u = out_pixel_chroma + 4;
                out_pixel_v = out_pixel_u - 2;
                /* each pixel value is copied to its next higher position,
                   so the output row becomes twice as long as the input row */
                *(out_pixel++) = *in_pixel_uy;
                *(out_pixel++) = *in_pixel_uy;
                *(out_pixel++) = *in_pixel_vy;
                *(out_pixel++) = *in_pixel_vy;
#if 1
                /* fix up the chroma bytes of the duplicated pixels */
                *(out_pixel_u) = *(out_pixel_chroma);
                out_pixel_chroma = (Uint8 *)(out_pixel - 1);
                *(out_pixel_v) = *(out_pixel_chroma);
#endif
                in_pixel_uy += 2;
                in_pixel_vy += 2;
            }
            out_pixel += (width * 2); /* skip the row reserved for the duplicate */
        }
        /* duplicate every even output row into the odd row below it */
        p_outfile_even_row = out_pixel - (img_size * 2);
        p_outfile_odd_row = p_outfile_even_row + width * 2;
        for (i = 0; i < height; i++)
        {
            for (j = 0; j < width * 2; j++)
            {
                /* here the entire row is replicated */
                *(p_outfile_odd_row++) = *(p_outfile_even_row++);
            }
            p_outfile_odd_row += width * 2;
            p_outfile_even_row += width * 2;
        }
        /* once finished, store the result in the output file */
        fwrite((void *)p_outfile, 2, img_size * 2, out);
    }
    free(p_infile);
    free(p_outfile);
    return;
}
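The key trick in convert_to_zoom() is that it views the UYVY byte stream as 16-bit words, so each word holds one chroma byte and one luma byte; writing a word twice duplicates a pixel horizontally. A minimal sketch of that word-duplication step (names illustrative; the chroma fix-up in the #if 1 block above is omitted here for clarity):

```c
#include <stdint.h>
#include <stdio.h>

/* Horizontally double one row of a YUV422 (UYVY) stream viewed as
 * 16-bit words: each word is written twice, mirroring the pixel
 * copies in convert_to_zoom(). in holds `width` words; out must
 * hold 2*width words. */
static void double_row_yuv422(const uint16_t *in, uint16_t *out, int width)
{
    for (int j = 0; j < width; j++) {
        out[2 * j]     = in[j]; /* original pixel word */
        out[2 * j + 1] = in[j]; /* duplicated neighbour */
    }
}
```

Combined with the row-replication loop, every input pixel ends up as a 2x2 block in the output, which is exactly the nearest-neighbour doubling described in the scaling section.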
1. To compile the program for converting a “raw” image to its zoomed form,
execute the following commands.
First, go to the folder in the DM355 file system where the “img2zoom.c” program
is stored.
Then compile the source program:
host $ arm_v5t_le-gcc img2zoom.c -o ../binaries/img2zoom
The above command generates the “img2zoom” executable at
“home/img_processing/prac/binaries”.
2. Execute the generated “img2zoom” binary with the following commands:
target $ cd ~/workdir/filesys/home/img_processing/prac/binaries
target $ ./img2zoom Processed_images/frame50.yuv
Processed_images/frame_zoom.yuv 360 240
This generates a raw image, “frame_zoom.yuv”, which is a zoomed version of
the original raw image “frame50.yuv”.
3. Now convert the “frame_zoom” image from “raw” format to “jpg” format
using the “jpeg encoder” of the DM355 DVEVM board.
target$ ./jpegenc Processed_images/frame_zoom.yuv
Processed_images/frame_zoom.jpg
You will be able to see the following output on the DM355 console:
@0x000df4a4:[T:0x400176d8] jpegenc - main jpegenc
@0x000dfc61:[T:0x400176d8] jpegenc - Application started.
@0x000eeefc:[T:0x400176d8] jpegenc - Encoder process returned - 0x0, 46457 bytes)
[Figures: input image and output (zoomed) image]
References:
• TI DM355 sp73a.pdf
• ti.com/processors
• A LeopardBoard Application: PhotoFrame, by Pedro Elías Alpízar Salas and Marco Emilio Madrigal Solano
• http://processors.wiki.ti.com/index.php/JPEG_Viewer
• http://processors.wiki.ti.com/index.php/DMAI_GStreamer_Plug-In_Getting_Started_Guide#DM355_software_installation_.28DVSDK_3.10.00.19.29