The HIS'15 website tells us "Our lives increasingly depend on the correct functioning of software". But whilst true in itself, software is just one of the links in a system-chain, each needing to be as strong as the others for a satisfactory outcome. History may have branded software as the weakest link, but can that be said today? A system is an entity complete in its context, and judged subjectively by its black-box behaviour. And when faced with its failure it isn't acceptable to claim that "my bit worked"! All technologies we utilise are fallible, as are the processes we use to create them: Hardware, Software, Optics, Acoustics, RF, Mechanics, Test, Reproduction, Maintenance ... Perfection is still reserved for the gods. Technologies must work together in the system, and historic silos do nothing to encourage this. So how good do systems need to be; how close to achieving it are we; and does one size fit all? And perhaps most challengingly, can the disciplines complement one another so the whole is stronger than the weakest links?
A quick 10-minute run-through of the Economic Footprint situation of the UK Electronic System Community, two years on from the ESCO Report. (Full Excel spreadsheet available via the link on the first slide.)
This document summarizes a workshop on computing for cyber-physical systems in 2025. It discusses that new sciences like quantum computing will take 15-20 years to become technologies and capabilities that can be included in mainstream products. Consumer needs, rather than professional needs, will drive the technologies. A systems focus spanning both physical and virtual components from various domains like silicon, software, optics, and manufacturing will be important. Commercial technologies emerging from consumer products must be utilized for professional applications, rather than paying high costs for special technologies.
We know that Science feeds into Products, but the way that it does is poorly understood. As a result many Research Programs fail to achieve the Exploitation anticipated by their creators or their funders. So these are tales of mankind's actual Exploitation of Science; told in the hope that they will first lead to wider understanding and later to optimisation of the processes involved.
Seminar at UoCambridge Dept of Materials Science and Metallurgy 3mar15. Today, we are making the most complex machines that the world has ever known; tomorrow they will be even more so. We sell them in large volumes to customers who value their functionality but have no recognition of the technology that enables them; even less, our roles in their creation. Technology products don't grow on trees; they are painstakingly designed and constructed one molecule at a time by man(kind). And the more complex the machines, the more complex they will have been to design and construct. Alas, out of sight is out of mind; so as our technology becomes less visible to the user, their support for its evolution will diminish! To correct this we must educate the public; but before we can do that we must understand what we do ourselves ... This talk could just be the start of that process.
The document discusses the evolution of electronics and computing technologies from 1974 to the present day. It notes that in 1974, electronics was still a professional field with valves still in use, while today electronics is integrated into many consumer products. It also summarizes Moore's Law of transistors doubling every two years, and how this has driven the rapid advancement of technologies, from early processors with 30-40 transistors to today's chips with billions of transistors. The document advocates for heavy reuse of technologies, methods and tools to enable efficient product design in the modern era of enormous design complexity.
Carving the Perfect Engineer (EWME'16, 11may16), Ian Phillips
- The document discusses how the field of electronics and the role of electronic engineers has changed since 1974. It focuses on the increasing complexity of electronic systems and how they now incorporate various technologies beyond just electronics, requiring engineers to have broader system design skills.
- Moore's Law and continuous technology advancement have enabled billions of transistors to be incorporated into systems, but this also increases the design challenge, driving the need for greater design reuse and productivity methods.
- ARM discusses their role in addressing this challenge by providing processor cores, interconnect IP, design tools and libraries, and partnerships to help engineers efficiently develop optimized electronic system solutions.
After 52yrs of being a Design Engineer, I officially leave ARM and Retire on 30nov16. You can't reach a milestone like retirement without thinking about your life as a design engineer, and how technology has changed during that time ... And if what I did could be useful lessons in any way to those that follow in my footsteps.
Uo Liverpool 11feb16: As I start the next stage of my career, I recall the changes that have happened in Electronics since I was in your position. It was a great time and career choice for me ... But can you hope for the same in your careers? I hope to show you that through history Design Engineers have always had exciting and challenging careers. And whilst my era was undoubtedly very special, there is no sign whatsoever of it being unique. Today's Electronic Systems are the integration of the most exciting technologies that mankind has ever invented; technologies which all continue to advance at an alarming pace. Technology change means challenge, learning and adaptation. Being a Design Engineer is a learning journey of a lifetime, an exciting journey that begins when you Graduate.
This document discusses the history and evolution of computing technologies from ancient mechanical devices to modern electronic systems. It begins with early mechanical computers from ancient Greece and progresses through developments like Babbage's Difference Engine, early electronic computers, and integrated circuits. The document notes that modern computing platforms involve contributions from global teams and enterprises, with most technologies being reused to improve productivity. It argues that businesses must focus on competitive products rather than technology for its own sake, and that designer productivity has become the main driver of technology advancement as complexity increases exponentially with Moore's Law.
Computing Platforms for the XXIc - DSD/SEAA Keynote, Ian Phillips
Wikipedia defines a Platform as "A raised level surface on which people or things can stand". A more familiar technical interpretation applies to the hardware and OS configuration used for the execution of software; most frequently the highly stable PC or Mainframe architectures. But the world has changed a lot since serious computing power moved into the embedded consumer arena. Now, with runs of many millions for single products, the argument for customisation is much more justifiable; so the traditional view of platforms is struggling against a tide of individuality. Can the ARM architecture bring stability back into this chaos, or is something else needed? Isaac Newton realised the reality of platforms when he talked of standing on the shoulders of giants. A platform is a stable place where engineers and scientists can stand to achieve more than they would otherwise have done. So our XXI Century Platforms are shaped to deliver improved Productivity, Reuse, Quality, TTM, Cost, etc. for the System Products we are now charged to deliver. It's business, stupid!
Building frameworks: from concept to completion, Ruben Goncalves
What are considerations when building a framework/library? How does that apply to OutSystems components? In this session, we’ll do a deep dive into the importance of addressing certain concepts like code granularity, and architecture, in order to create useful, future-proof and coherent frameworks that deliver the best possible developer experience.
The trials and tribulations of providing engineering infrastructure, TechExeter
The document shares lessons learned from running engineering infrastructure at Arm, covering several key points:
1) Shared infrastructure can lead to compromise and additional effort with no value if used for multiple disciplines. Specialized environments for each discipline are preferred.
2) Automation is critical to avoid inefficiencies from rapid growth outpacing manual processes. Automate where possible.
3) Ubiquitous shared storage breeds laziness and poor data management practices that do not scale. More controlled use of storage is needed.
4) Educating engineers on responsible infrastructure use is important, as not all have expertise in large-scale systems. Partnership and guidance is preferred over being solely "the police".
The document proposes a scalable AI accelerator ASIC platform for edge AI processing. It describes a high-level architecture based on a scalable AI compute fabric that allows for fast learning and inference. The architecture is flexible and can scale from single-chip solutions to multi-chip solutions connected via high-speed interfaces. It also provides details on the AI compute fabric, processing elements, and how the platform could enable high-performance edge AI processing.
So, you want to build a hardware product? Every so often, a device comes along that changes the way we live our daily lives and things are never the same again. With today's digital technology, such devices may come more frequently than in the past - personal gadgets you cannot live without. What’s inside? What makes it tick? How do you find out? In this sharing session, Mark will provide an introduction to hardware hacking and why it matters, going through some quick tips on getting cosy with hardware to find out what makes it tick. Mark (MK FX) is a founder of Bazinga! Pte Ltd, a technology development and prototyping company that builds gadgets from ideas. An engineer since birth, because if you can dream it, think it - you can build it.
Computing Platforms for the 21C - 25feb14, Ian Phillips
(See http://youtu.be/Z0YU0T5cR6E )
A Compute Platform is normally considered to be the highly stable HW and SW architecture associated with Mainframe or PC computers. But the 21st century is bringing serious computing power to the hands of the consumer, and computers that don't look like computers have totally eclipsed the traditional computing market. Does this change the definition of the Compute Platform in the 21C?
## By Ian Phillips, Uo.Liverpool. 25feb14. http://ianp24.blogspot.co.uk/
## Opinions expressed are my own.
This document discusses using the Eclipse integrated development environment to develop software for ARM microcontrollers. It recommends using the free and open-source GNU toolset, including compilers, linkers and debuggers, along with the Eclipse IDE to develop software for the inexpensive Philips LPC2106 ARM microcontroller board. While the free tools are not as full-featured as commercial development packages that cost thousands of dollars, they provide a low-cost option for students and hobbyists to begin experimenting with 32-bit ARM development. The document also recommends the inexpensive Olimex LPC2106 boards as hardware to use with the free software tools.
This document discusses stack-based buffer overflows, including:
- How they occur when a program writes outside a fixed-length buffer, potentially corrupting data or code.
- Their history and use in attacks like the 2001 Code Red worm.
- Technical details like how the stack and registers work.
- Career opportunities in security analysis and development to prevent and respond to such vulnerabilities.
- The ethical responsibilities of developers to write secure code and disclose vulnerabilities responsibly.
Will ARM be the new Mainstream in our Data Centers? @Rejekts Paris 2024, Tobias Schneck
As I have been working with my new Apple Mac M1 for over a year, I was wondering why ARM is not used more in regular application workload scenarios? ARM for desktop computing is really stable, seamless, reliable and for me a game changer - when will we recognize the same for our servers? Especially in times when energy and raw materials are expensive, we should also benefit from the efficiency of ARM technology in our Data Centers. So what’s missing?
We have Kubernetes on ARM, images!
ADITECH CUSTOMER MEET-2015 was held at Hotel RAMADA, Millennium Business Park, Navi Mumbai. The event was sponsored by Intel and Innodisk Taiwan and was attended by 39 System Integrator partners from Mumbai, Pune, Delhi, Surat and Bangalore. Intel presented the IoT opportunities for SMEs. Innodisk enlightened SI partners on the latest technologies used in industrial-grade SSDs. Aditech demonstrated industrial-grade solutions and transportation solutions; its presentation covered industrial-grade Panel PCs and industrial communication. The event ended with a lucky draw and group photograph, followed by a networking dinner and an ADITECH office visit.
The document summarizes the dangers of insecure IP cameras and how many are exposed online. It describes how hackers found a vulnerability in TRENDnet cameras in 2012 that allowed unauthorized access. It then discusses how tools like SHODAN can be used to search for and enumerate additional exposed cameras online, noting that many contain high-definition video feeds showing sensitive locations and activities.
Erlang is a programming language used to build distributed, fault-tolerant systems. It was developed in the late 1980s at Ericsson for building telecom applications. Erlang uses lightweight processes to model systems and provides features like process isolation, message passing, distribution, and hot code swapping. The Open Telecom Platform (OTP) is a set of libraries and design principles for building robust Erlang applications. Many companies use Erlang for building scalable and reliable systems, including WhatsApp, Klarna, and Ericsson.
Philippe Coval gave a presentation on prototyping IoT devices using GNU/Linux and the IoTivity framework. The presentation covered initializing IoTivity clients and servers, registering resources, discovering resources over the network, and implementing basic GET and POST operations to control resources representing physical devices. Code examples were provided using both C++ for Linux systems and C for resource-constrained MCUs.
This document discusses building resource efficient distributed systems at scale. It covers several key lessons:
1) Understand deeply the relationship between latency, bandwidth, and capacity across infrastructure levels as bandwidth increases faster than latency and the gap between bandwidth and storage capacity widens over time.
2) Distributed systems fundamentally deal with distance and having multiple components, so failure is expected. However, developing distributed applications should be similar to non-distributed ones by concealing complexity.
3) Leverage cheaper processors from the consumer device market which have better price/performance than servers and reduce power costs significantly. Automation can also reduce people costs dominating large data centers.
The document contains questions from various rounds of a quiz on slightly geeky topics related to IT, music, general computing knowledge, and jobs. The rounds cover warm-up questions, MP3s and music, TCP/IP, Linux, viruses, project management, networking, and software testing.
Hands-On Workshop on Performance Optimization for Intel Xeon Phi Processor Family x200 (formerly Knights Landing) from Colfax International. More information at http://colfaxresearch.com/knl-webinar/
Literature Review Basics and Understanding Reference Management.pptx, Dr Ramhari Poudyal
Three-day training on academic research, focusing on analytical tools, at United Technical College, supported by the University Grant Commission, Nepal, 24-26 May 2024.
UNLOCKING HEALTHCARE 4.0: NAVIGATING CRITICAL SUCCESS FACTORS FOR EFFECTIVE I..., amsjournal
The Fourth Industrial Revolution is transforming industries, including healthcare, by integrating digital, physical, and biological technologies. This study examines the integration of 4.0 technologies into healthcare, identifying success factors and challenges through interviews with 70 stakeholders from 33 countries. Healthcare is evolving significantly, with varied objectives across nations aiming to improve population health. The study explores stakeholders' perceptions on critical success factors, identifying challenges such as insufficiently trained personnel, organizational silos, and structural barriers to data exchange. Facilitators for integration include cost reduction initiatives and interoperability policies. Technologies like IoT, Big Data, AI, Machine Learning, and robotics enhance diagnostics, treatment precision, and real-time monitoring, reducing errors and optimizing resource utilization. Automation improves employee satisfaction and patient care, while Blockchain and telemedicine drive cost reductions. Successful integration requires skilled professionals and supportive policies, promising efficient resource use, lower error rates, and accelerated processes, leading to optimized global healthcare outcomes.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
1. 1
Stronger than its weakest link
High Integrity Software Conference (HIS'15)
5nov15: Bristol.
Pdf & SlideCast @ http://ianp24.blogspot.com
Opinions expressed are my own ...
Prof. Ian Phillips
Principal Staff Engineer
ARM Ltd
ian.phillips@arm.com
Visiting Prof. at ...
Contribution to
Industry Award 2008
2v1
2. 2
High Integrity Software !?
..Or..
"The scientific method assumes that a system with perfect integrity yields a singular extrapolation within its domain that one can test against observed results" (Wikipedia)
§ Is Software the weakest link in High Integrity Systems?
§ Such that improving it is all that's necessary to produce High Integrity Systems?
§ When we say Software are we actually thinking Computation?
§ But Computation is about results, not about implementation technologies!
3. 3
We know what Proper Computing is...
§ HPC and Mainframe ... maybe Workstation
§ But not really Laptop or ... (Heaven forbid) a Pocketable?
4. 4
Graham's Orrery - c1700
§ A machine to Compute the position of the planets
§ Single-Task, Continuous-Time, Analogue, Mechanical Computer (With backlash!)
George Graham. Clock-Maker (1674-1751)
5. 5
Amsler's Planimeter - c1856
Planimeter 2015 !
§ A Machine for Computing the Area of an arbitrary 2D shape
§ Technology: Precision Mechanics, Analogue
§ Available today ... Electronically enhanced
Jakob Amsler-Laffon. Mathematician, physicist, engineer (1823-1912)
6. 6
IN (x): Enumerated Phenomena => OUT (y): Processed Data/Information
y=F(x)
§ State (s) and Time (t) are implicit or explicit variables in this
§ And so are Accuracy (a), Reliability (r) and Cost ($)
§ All of which can be balanced (Architected) to meet End-Customer needs
§ Exceeding needs almost always 'costs' more!
... Technologies and Methodologies just offer 'star$' options over basic functionality
... Not all of which will be commercially valuable
Computing is solving a Model of a Subset of Reality ...
Fast enough to be useful and affordable by its customer
y=F(x,s,t,a,r,$)
10. 10
The Invisible Face of Computing Today
Unrecognised but Vital ... All need to be Dependable
11. 11
The Visible Face of Computing Today
Essential but not Vital ... But BIG-BIG-BIG $
12. 12
§ Digital Electronics
§ Software
§ Memory
§ Optics
§ Analogue Electronics
§ Sensors/Transducers
§ Mechanics
§ Micro-Motors
§ Displays
§ Discharge Tube
§ Robotic Assembly
§ Plastic, Metal, Glass
Input: Image(Light) => Compute (Process Image) => Output: SD Card (Electrons)
... Many Technologies seamlessly cooperating, to Enhance Human Memory
... Traditional silos (inc. SW and HW) are just a means to this end!
Electronic System (Cyber-physical System) - c2015
Incorporating DIGIC5+ (ARM)
[Diagram labels: System-Level Computation vs 'Classic' Computer]
13. 13
Computing for the Masses ...
... Technology Products are Increasingly 'Intelligent'
[Chart: units shipped (Millions) vs Human Population, 1970-2030: Main Frame, Mini Computer, Personal Computer, Desktop Internet, Mobile Internet]
1st Era: Select work-tasks
2nd Era: Broad-based computing for specific tasks
3rd Era: Computing as part of our lives
Technology is the Driver => Consumer is the Driver
... Old Markets are still there; but don't drive the Technology today!
15. 15
Typical 2015 Computing Platform
Exynos 5422
Eight 32-bit CPUs (big.LITTLE):
• Four big (2.1GHz ARM A15) for heavy tasks;
• Four small (1.5GHz ARM A7) for lighter tasks.
+ Nine Mali GPU cores ...
... A ~30-Core Heterogeneous Multi-Processor ... In your Shirt Pocket!
One Board ... 21 significant 'Chips'
16. 16
2010: Apple's A4 SIP Package (Cross-section)
IC Packaging Technology
§ The processor is the centre rectangle. The silver circles beneath it are solder balls.
§ Two rectangles above are RAM die, offset to make room for the wirebonds.
§ Putting the RAM close to the processor reduces latency, making RAM faster, and reduces power consumption ... But increases cost.
§ Memory: Unknown
§ Processor: Samsung/Apple (ARM Processor)
§ Packaging: Unknown (SIP Technology)
Source ... http://www.ifixit.com
[Photo labels: Processor SOC Die; 2 Memory Dies; Glue; Memory 'Package'; 4-Layer Platform 'Package'; Steve Jobs WWDC 2010]
17. 17
2013: Samsung Solid-State Memory
§ Smart Memory (eMMC)
§ 16-128Gb in a single package
§ 8Gb/die. Stacked 2-16 die/package
§ Handles errors in the API (Smart Interface)
§ Package just 1.4mm thick! (11.5x13x1.4mm)
... Smaller than a postage stamp
19. 19
§ They sell things that Their Customers desire and can afford
§ To satisfy the End-Customer's needs ... In an End-Product which may be several 'layers' above them.
§ Focus on their Core Competencies as a Component Provider in a Global Market
§ Avoid Commoditisation by Differentiation
§ Improved Cost and Quality (by improving Process) ..and..
§ Improved Business-Models (which make the Money) ..and..
§ Improved Functionality (by new Technology and Methods)
§ But New Product Development is a Cost and a Risk to be Minimised
§ Technology (HW, SW, Mechanics, Optics, Graphene, etc) just enables Options!
§ New-Technology may cost more (including risk) than it delivers in Product Value!
§ Over-Design costs ... Business can't afford the Precautionary Principle!
... Because successful End-Products fund their entire (RD&I) Value-Chains
... Reuse of their Technologies becomes an economic necessity in other markets!
Computing Technologies in Business Context
Businesses have to be Competitive, Money-Making Machines today ...
20. 20
Components and Sub-Systems from Global Enterprise ...
... Global Teams contributing Specialist Knowledge & Knowhow
§ Apple ID'd 159 Tier-1 Suppliers ...
§ Thousands of Engineers Globally
§ Est. 10x Tier-2 Suppliers ...
§ Including Virtual Components1 and Sub-Systems (ARM and other IP Providers)
§ Multiple Technologies ...
§ Hardware, Software, Optics, Mechanics, Acoustics, RF, Plastics, etc
§ Manufacturing, Test, Qualification, etc.
§ Methods, Tools, Training, etc
§ Tens of thousands of Engineers Globally
... More than 90% of Technology and Methods are Reused (productivity)!
1: Virtual Components do not appear on the BOM
21. 21
§ But the only way to economically realise this potential is by product evolution; reusing and reusing again the work of our technical predecessors ...
§ Hardware, Software and other Technologies; Methods and Tools; and throughout the stack
§ In-Company: Sourced and Evolved from Predecessor Products
§ Ex-Company: Sourced from businesses with Specialist Knowledge/Experience
§ Reuse Improves Quality; as objects are designed more carefully, and bug-fixes are incremental
§ Reuse Improves Productivity; as objects can be deployed without understanding their implementation technology (or its limitations)
... It delivers working systems quickly with finite teams; but the dependability cannot be quantified!
... Despite this, Commercial Technologies will be used in Systems on which people Depend
§ The cost of alternatives will be several orders of magnitude too great
§ The issue is (just) making dependable systems using undependable components
Designer Productivity has become the Limiting Factor
The Customer Expectation of the Billions of available Transistors is irresistible!
22. 22
ARM: Delivers Reuse-Based Productivity ...
... 24 Processors in 6 Families for different Application Domains
About 50MTr
About 50KTr
23. 23
... Tools to create optimal Heterogeneous Multi-Processors ...
[Block diagram: four Quad Cortex-A15 clusters (each with L2 cache) on a CoreLink™ CCN-504 Cache Coherent Network with Snoop Filter and 8-16MB L3 cache; dual CoreLink™ DMC-520 memory controllers (x72, DDR4-3200 PHY); NIC-400 Network Interconnect to Flash, GPIO, USB, SATA; PCIe, 10-40 GbE, DPI, Crypto and DSP blocks; ACE and AHB interfaces; Interrupt Control; IO Virtualisation with System MMU]
§ Up to 4 cores per cluster
§ Up to 4 coherent clusters
§ Integrated L3 cache
§ Dual channel DDR3/4 x72
§ Up to 18 AMBA interfaces for I/O coherent accelerators and IO
§ Heterogeneous processors – CPU, GPU, DSP and accelerators
§ Virtualized Interrupts
§ Uniform System memory
§ Peripheral address space
24. 24
… Other Tools, Libraries and Partners to Realize the Potential
§ Technology to build Electronic System solutions:
§ Software, Drivers, OS-Ports, Tools, Utilities to create efficient systems with optimized software solutions
§ Diverse Physical Components, including CPU and GPU processors designed for specific tasks
§ Interconnect System IP delivering coherency and the quality of service required for lowest memory bandwidth
§ Optimised Cell-Libraries for highly optimized SoC implementations
§ Well Connected to Partners in the Life-Cycle:
§ For complementary tools and methods required by System Developers
§ Global Technology, Global Partners:
§ >900 Licences; Millions of Developers
25. 25
Are the Outcomes of this 'chain' Dependable?
Evidently so: They are Functional and Dependable enough to satisfy Billions/yr!
(2Q2015)
Smart-Phone shipments 2Q15 - 185 million (~0.75B/yr)
... The probability of a 'fairly reliable' system failing, when you need to use it for an 'improbable' event, is 'highly improbable' ... And mostly this is enough
26. 26
'Optimal' Platform
[Diagram: functions F1-F5 mapped as Threads onto a platform of RTOS/Drivers, Hardware Interface, Bus(es), Processor(s) and hardware blocks HW1-HW4]
Create Functional-Model1 on a 'Generic' Platform
Evolving the Model (& Platform) until Functional and Non-Functional Performance is Adequate.
NOTE: 'Final SW' is still a Model of Behaviour!
Design is Transforming a Model of Behaviour ...
... evolving a Mathematical Model to meet Non-Functional Constraints
Transform to a Functional-Model on an 'Optimal' (HW/SW) Platform
1: This includes a Model of Execution such as a Java VM.
27. 27
§ All models are a simplification of reality; therefore they all have limitations
§ "All models are wrong, but some are useful" (G.E.Box)
§ Normal Software Design Methods are create-it-wrong, test-it-right ...
§ Quality is established by Test; and bug-fixes/patches in the field (An inherently poor method)
§ Software Reuse offers hugely improved Productivity (Not using it is not an option)
§ Software Reuse offers improved Quality (But over what?)
§ Examination shows that all code has high residual errors ...
§ Well-structured and tested Source-Code has ~5 errors per 1,000 lines of code (E-KLOC)
§ Commercial code is typically ~5x worse than this
§ Most errors are harmless – But there is no useful correlation
§ Formal-Methods are better; but cost is high if you can't utilise (normal) legacy code.
§ But even 'Perfect-Software' still has to execute on an Imperfect-Platform
... "YES!": But Good-Enough satisfies the Commercial Imperative for most applications
Is Software (Logic) Inherently Undependable?
Software is a Model of Reality, executing on a Hardware and Software Platform
28. 28
Open Source is Dependable?
"Somebody will see the bugs!" (But only if they look!)
1: http://www.wired.com/2014/04/heartbleedslesson/
2: http://veridicalsystems.com/blog/of-money-responsibility-and-pride/
"It is now very clear that OpenSSL development could benefit from dedicated full-time, properly funded developers"
"OSF typically receives only $2,000 a year in donations"
§ OpenSSL HeartBleed bug (2014) 1
§ Update was received just before a Public Holiday
§ Editor was a known and high-quality source
§ Code was reviewed informally and released
§ Editor was conflicted with day-job, family and holiday pressure 2
§ Too few resources to do a proper job.
§ This was a classic E-KLOC error ...
§ Not a Coding, Formatting, or Functional error
§ It was a System error (an omission in a non-functional aspect of the code).
... Was the 'fault' with the software Source (OpenSSL Software Foundation (OSF))?
... Or a User Community too ready to believe in the Myth of Open Source software?
... Or was it caused because its Source was open to examination?
30. 30
Mitigating this we have ...
§ Weak Transistors: Not all ...
§ Are at 70 degC even if the die is (But some will be higher)
§ Are Minimum Size (Larger 'area' reduces variability)
§ Are on Critical Paths; and the probability of there being more than one on a path is low!
§ CMOS Logic: Is very robust and will continue to function with out-of-spec transistors
§ Leaky Gates and Faster Transitions are seldom functional failures (but they do hit reliability!)
§ Speed variations on a path average out (on average!)
§ Errors are frequently difficult to detect (and thus correct!)
§ Memory: Analogue Circuits are much more sensitive to transistor variation. But ...
§ Failures are easier to detect (and work around)
§ Spare rows/columns are included to fix manufacturing (static) defects ... but not dynamic (use)
§ NV-M limited write-cycles and bit failures are shielded by their smart API ... to some degree.
... Hardware failure is not always easily spotted at the functional level!
So is Hardware (Logic) Dependable? 2/3
31. 31
§ And we haven't included imponderables ...
§ Internally and Externally generated noise? (Greater susceptibility at lower voltages)
§ High-energy particles? (Greater susceptibility at smaller geometries)
§ Wear-out: Vt/Gain drift and Electromigration? (Greater susceptibility at smaller geometries)
§ Local Hot-Spots? (140C is not uncommon on chip)
§ Limitations of Verification and Test (State-Space exploration is always a sub-set)
§ We are repeatedly multiplying tiny improbables by ever larger numbers ...
§ And many of the values are only guesses!
§ We have no real idea about the reliability/dependability of modern Systems or Components
§ But we know that as process geometries shrink, Susceptibility will get worse ...
§ Chips will get ever more complex (and more chips will be used in more complex Systems)
§ Transistors will get smaller and Designers will erode safety margins to get performance
... Despite this; Chips and Systems do Yield more than we would rightly expect ...
... So we must be utilising Unknown Safety Margins!
So is Hardware (Logic) Dependable? 3/3
32. 32
Killing a Sacred Cow: SW and HW Logic are the Same
...They have different characteristics, so choice is a System Architectural decision!
// A master-slave type D-Flip Flop
module flop (data, clock, clear, q, qb);
input data, clock, clear;
output q, qb;
// primitive #delay instance-name
// (output, input1, input2, .....),
nand #10 nd1 (a, data, clock, clear),
nd2 (b, ndata, clock),
nd4 (d, c, b, clear),
nd5 (e, c, nclock),
nd6 (f, d, nclock),
nd8 (qb, q, f, clear);
nand #9 nd3 (c, a, d),
nd7 (q, e, qb);
not #10 inv1 (ndata, data),
inv2 (nclock, clock);
endmodule
'Hardware' Language (Verilog) 'Software' Language (C)
#include <stdio.h>
#include <time.h>
/* Use the PC's timer to check */
/* processing time */
int main(void)
{
    clock_t start, deltime;
    long junk, i;
    float secs;
LOOP:
    printf("input loop count: ");
    if (scanf("%ld", &junk) != 1)
        return 0;
    start = clock();
    for (i = 0; i < junk; i++)
        ;                       /* busy loop under test */
    deltime = clock() - start;
    secs = (float) deltime / CLOCKS_PER_SEC;
    printf("for %ld loops, #tics = %ld, %f s\n",
           junk, (long) deltime, secs);
    goto LOOP;
}
[Diagram: Target Platform (CMOS ... CPU); Target Architecture Info; Compilers (HW ... SW); Configuration Files (HW ... SW)]
35. 35
§ System-Level Dependability is what matters ...
§ Component and Sub-System dependability is inherently poor (and will get worse).
§ Productivity demands that Dependable Systems must Reuse Components and Sub-Systems (Physical and Virtual); and the affordable ones are of Commercial quality!
§ Clean-Sheet design is not an option for almost all complex products!
... the cost-is-no-object customer is an endangered species
§ Increasing the Dependability of Components and Sub-Systems helps; but can never be enough
§ ARM's product is really 'Enhanced Reuse for Electronic System Design and Manufacture'
... The Only Place to implement System-Level Dependability on an Undependable Platform is at the System-Layer!
§ Reliable components and sub-systems will help, but cannot ever be enough
§ Predominantly a 'Software' challenge; but not alone (Don't forget the simple Watch-Dog)
Dependable on Undependable
Any Methods that are based on perfection in HW or SW are untenable ...
36. 36
The Real Conclusions
§ Systems are what End-Customers buy; they expect them to be Dependable Enough
§ A subjective concept; which is Application, State and Context dependent (& Technology independent)
§ Commercial Components (HW/SW) will be the building blocks of Dependable Systems
§ Commercial use gives us the Technologies which we are economically bound to use today
§ Though they work better than we would rightly expect, we cannot quantify their quality
§ Improving their Quality/Reliability/Dependability helps; but 100% is an asymptotic goal!
§ The System Knows what the System Wants
§ So: System behaviour and robustness must be handled at the System-Level (Top-Level); only it can know the expected action and appropriate corrective action for its domain.
§ And: Because of the size of the Functional and Non-Functional Space, conformance cannot be measured; so it will require a Policy-Based approach.
... Meanwhile systems that people depend on will be produced
... The Commercial Imperative can't/won't wait for the 'right methodology'
37. 37
The END Is Very Nigh ...
Pdf & SlideCast @ http://ianp24.blogspot.com