Transcript of a sponsored discussion on how UNIX has evolved in the 20-year history of UNIX and the role of The Open Group in maintaining and updating the standard.
The UNIX Evolution: An Innovative History reaches a 20-Year Milestone
Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: The Open Group.
Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for today's panel discussion examining UNIX, a journey of innovation.

We're here with a distinguished panel to explore the 20-year history of the UNIX standard. Please allow me to introduce our panel: Andrew Josey, Director of Standards at The Open Group; Darrin Johnson, Director of Solaris Engineering at Oracle; Tom Mathews, Distinguished Engineer of Power Systems at IBM; and Jeff Kyle, Director of Mission-Critical Solutions at Hewlett Packard Enterprise.
It's not often that you reach a 20-year anniversary in information technology where the relevance is still so high and the prominence of the technology is so wide. So let me first address a question to Andrew Josey at The Open Group. UNIX has been evolving during probably the most dynamic time in business and technology.

How is it that UNIX remains so prominent, a standard that has clung to its roots, with ongoing compatibility and interoperability? How has it been able to maintain its relevance in such a dynamic world?
Andrew Josey: Thank you, Dana. As you know, UNIX was started at Bell Labs by Thompson and Ritchie back in 1969. It was a very innovative, very different approach, an approach that has endured over time. During that time, we saw a lot of work going on in different standards bodies.

We saw, in the early '80s, the UNIX wars: different, fractured versions of the operating system, many of them incompatible with each other, and then the standards bodies bringing them together.

We saw efforts such as IEEE POSIX, and then X/Open. Later, The Open Group was formed to bring that all together, when the different vendors realized the benefits of building a standard platform on which you can innovate.

So, over time, the standards have added more and more common interfaces, raising the bar upon which you can place that innovation. Over time, we've seen changes like in the mid-'90s, when there was a shift from 32-bit to 64-bit computing.
At that time, people asked, "How will we do that? Will we do it the same way?" So the UNIX vendors came to what, at that time, was X/Open. We had an initiative called the Large File Summit, and we agreed on a common way to do that. That was a very smooth transition.

Today, everybody takes it for granted that UNIX systems are scalable, powerful, and reliable, and this is all built on that 64-bit platform, multiprocessing, and all these capabilities.

That's where we see the standards come in, enabling that enduring, adaptable philosophy, and that's the UNIX platform that's relevant today. The same is true of today's virtualization, cloud, and big data, which are also driven by UNIX systems in the back office.
The Open Group involvement
Gardner: So while we're looking at UNIX's 40-year history, we're focusing on the 20-year anniversary of the single UNIX specification and the ability to certify against that, and that's what The Open Group has been involved in, right?
Josey: We were given the UNIX trademark from Novell back in, I think it was 1993, and at that point, the major vendors came together to agree on a common specification. At the time, its code name was Spec 1170. There were actually 1,168 interfaces in the spec, but we wanted to round up, and, apparently, that was also the amount of money that they spent at the dinner after they completed the spec.

So, we adopted that specification and we have been running certification programs against it.
Gardner: Darrin, with the dynamic nature of our industry now -- with cloud, hybrid cloud, mobile, and a tightening between development and operations -- how is it that UNIX remains relevant today, given these things that no one really saw coming 20 years ago?
Darrin Johnson: I think I can speak for everybody here that all our companies provide cloud services, whether it's public cloud, private cloud, or hybrid cloud, and whether it's infrastructure as a service (IaaS), software as a service (SaaS), or any of the other as-a-service options. The interesting thing is that to really be able to provide that consistency and that capability to our customers, we rely on a foundation, and that foundation is UNIX.

So our customers, even though they can maybe start with IBM, have choice. In turn, from a company perspective, instead of having to reinvent the wheel all the time for the customer or for our own internal development, it allows us to focus on the value-add, the services, the capabilities that build upon that foundation of UNIX.
So, something that may be 20 years old, or actually 40 years from the original version of UNIX, has evolved into such a solid foundation that we can innovate on it.
Gardner: And what's the common thread around that relevance remaining? Is it the fact that it is consistently certified, that you have assurance that what's running in one place will run in another on any hardware? How is it that the common spec has been so instrumental in making this a powerful underpinning for so much modern technology?
Josey: A solid foundation is built upon standards, because we can have, as you mentioned, assurance. If you look at the certification process, there are more than 45,000 test cases that give assurance to developers and to customers that there's going to be determinism. All of the IT people that I have talked to say that deterministic behavior is critical, because when it's non-deterministic, things go wrong. Having that assurance enables us to focus on what sits on top of it, rather than on whether the 'ls' command works right or how we can know how much space is in a file system. Those are givens. We can focus on the innovation instead.
Gardner: Over the past decades, UNIX has found itself at the highest echelon of high-
performance computing in high-performance cloud environments. Then, it goes down to the
desktop as well as into mobile devices, pervasively, and into micro devices, embedded and real-time computing. How has that also benefited from the standard? Is it common to see a single, consistent code base up and down the spectrum, from micro to macro?
Several components
Johnson: If you look at the standard, it contains several components, and it's really modular in
a way that, depending on your need, you can pick a piece of it and support that. Maybe you don't
need the complete operating system for a highly scalable environment. Maybe you just need a
micro-controller. You can pick the standard, so there is consistency at that level, and then that
feeds into the development environment in which an engineer may be developing something.
That scales. Let's say you need a lot of other services in a large data center; you still have that consistency throughout. Whether it's Solaris, AIX, HP-UX, Linux, or even FreeBSD, there's a consistency because of those elements of the standard.
Gardner: Developers, of course, are what keep any platform going over time. It's a chicken-and-egg relationship: the more apps, the more relevant the platform; the stronger and more pervasive the platform, the more likely the apps. So, Jeff, for developers, what are some of the primary benefits of UNIX, and how has that contributed to its longevity?
Jeff Kyle: As was said, for developers it's the consistency that really matters, and the UNIX standards deliver that consistency. As we look at this, we talk about consistent APIs, a consistent command line, and consistent integration between users and applications.
This allows the developers to focus a lot more on interesting challenges and customer value at
the application and user level. They don’t have to focus so much on interoperability issues
between OSs or even interoperability issues between versions of the single OS. Developers can
easily support multiple architectures in heterogeneous environments, and in today’s virtualized
cloud-ready world, it’s critical.
Gardner: And while we talk about the past story with UNIX, there's a lot of runway to the
future. Developers are now looking at issues around mobile development, cloud-first
development. How is UNIX playing a role there?
Kyle: The development that’s coming out of all of our organizations and more organizations is
focused first on cloud. It's focused first on fully virtualized environments. It's not just the
interoperability with applications, but it is the interoperability between, as I said
before, the heterogeneous environments, the multiple architectures.
In the end, customers are still trying to do the same things that they always have.
They're trying to use applications in technology to get data from one place to
another and more effectively and efficiently use that data to make business
decisions. That’s happening more and more "mobile-y," right?
I think every HP-UX, AIX, Solaris, and UNIX system out there is fully connected to a mobile world and the Internet of Things (IoT). We're securing it all more than customers realize.
Gardner: Tom, let's talk a little bit about the hardware side, recognizing that cost and risk play a huge part in decision-making for customers and enterprises. What is it about UNIX, now and into the future, that allows a hardware approach that keeps those costs and risks down and makes this a powerful combination as a platform?
Scale up
Tom Mathews: The hardware approach for UNIX has traditionally been scale up. There
are a lot of virtues and customer values around scale up. It’s a much simpler
environment to administer, versus the scale out environment that’s going to have
a lot more components and complexity. So that’s a big value.
The other core value that is important to many of our customers is that there has
been a very strong focus on reliability, availability, and scalability. At the end of
the day, those three words are very important to our customers. I know that
they're important to the people that run our systems, because having those values
allows them to sleep at night and have weekends with their families and so
forth. In addition to just running the business, things have to stay up and it has been that way for
a long time, 7×24×365.
So these three elements -- reliability, availability, and scalability -- have been a big focus, and a lot of that has been delivered through the hardware environment, in addition to the standards.
The other thing that is critical, and this is really a very important area where the standards figure
in, is around investment protection. Our customers make investments in middleware and
applications and they can’t afford to re-gen those investments continuously as they move through
generations of operating systems and so forth.
The standards play into that significantly. They provide the stable environment. In the standards test suite right now, there are something like 45,000 tests for standards conformance. So it's stability, reliability, availability, and serviceability in this investment-protection element.
Gardner: Now, we've looked at UNIX through the lens of developers, hardware, and also
performance and risk. But another thing that people might not appreciate is a close relationship
between UNIX and the advancement of Internet and the World Wide Web. The very first web
servers were primarily UNIX. It was the de facto standard. And then service providers, the folks hosting websites and the Internet itself, were using UNIX for performance and reliability reasons.
So, Darrin, tell us about the network side of this. Why has UNIX been so prevalent along the
way when the high-performance network and then the very important performance
characteristics of a web environment came to bear?
Johnson: Again, it's about the interconnectedness. Back in my younger years, just interfacing Ethernet with AppleTalk, or whichever technologies you picked, took so much time and effort.
Any standard, whether it’s Ethernet or UNIX, helps bring things together in a way that you don’t
have to think about how to get data from one point to another. Mobility really is about moving
data from one place to another in a quick fashion where you can do transactions in microseconds,
milliseconds, or seconds. You want some assurance in the data that you send from one place to
another. But it's also about making sure of, and this is a topic that’s really important today,
security.
Knowing that when you have data going from one point to another, it's secured, and that on each node, each point, security continues. Standards, and making sure that IBM interacts with Oracle and Oracle interacts with HPE, assure our customers, and the people who don't even see the transactions going on, that they can have some level of confidence in a reliable, high-performance, and secure network.
Standardization and certification
Gardner: Well, let's dig a little bit into this notion of standardization and certification, putting things through their paces. Some folks might be impatient going through that. They want to just
get out there with the technology and use it, but a level of discipline and making sure that things
work well can bear great fruit for those who are willing to go through that process.
Andrew, tell us about the standard process and how that’s changed over the past 20 years,
perhaps to not only continue that legacy of interoperability, but perhaps also increase the speed
and the usability of the standards process itself.
Josey: Over those 20 years, we've made quite a few changes in the way we do the standards development ourselves. It used to be that a group of us would meet behind closed doors in different locations, and there were three such groups of standards developers.
There was an IEEE group, an X/Open group (later to become an Open Group group), and an international standards group. Often, they were the same people, who had to keep going to these same meetings and seeing the same people, but wearing different hats. As I said, it was very much behind closed doors.
As it got towards the end of the '90s, people were starting to say that we were spending too much
money doing the same thing, basically producing a pile of standards that were very similar but
different. So in late 1997-1998, we formed something that we call the Austin Group.
It was basically The Open Group’s members. Sun, IBM, and HP came to The Open Group at that
time, and said "Look, we have to go and talk to IEEE, we have to talk to ISO about bringing all
the experts together in a single place to do the standard. So starting in 1998, we met in Austin, at
the IBM facility -- hence the name The Austin Group -- and we started on that road.
Since then, we've developed a single set of books. On the front cover, we stamp the designations of it being an IEEE standard, an Open Group standard, and an International Standard. So technical folks only have to go to a single place and do the work once, and then we put it through the adoption processes of the individual organizations.
As we got into the new millennium, we changed our ways as well. We don't physically meet anywhere anymore. We do everything virtually, and we've adopted some of the approaches of open-source projects, for example an open bug tracker (Mantis).
Anybody can access the bug tracker, file a bug against the standard, and see all the comments that go in against a bug, so we're completely transparent. With the Austin Group, we allow anybody to participate. You don't have to be a member of IEEE or an international delegate anymore to participate.
We've had a lot of input, and continue to have a lot of input, from the open-source community. We've had prominent members of the Linux and open-source communities, such as maintainers of key subsystems like glibc and of commands and utilities. They come to us because they want to get involved; they see the value in standards.
They want to come to a common agreement on how the shell should work, how this utility
should work, how they can pull POSIX threads and things into their environments, how they can
find those edge cases. We also had innovation from Linux coming into the standard.
In the mid-2000s, we started to look at new APIs in Linux and say that they should also be in UNIX. So we added, I think, four specifications that we developed based on Linux interfaces from the GNU project, in areas such as internationalization and common APIs. One thing we have always wanted to do is raise that bar of common functionality.
Linux and open-source systems are very much working with the standard as much as anybody
else.
Process and mechanics
Johnson: There's something I'd like to add about the process and the mechanics, because in my organization I own it. There are a couple of key points. One is that it's great that we have an organization, The Open Group, that not only helps create and manage the standard, but also develops the test suites for certification. So it's one organization working with the community -- the Austin Group, and of course IEEE and The Open Group members -- to create a certification test suite.
If any one of our organizations had to create or manage that separately, that would be a huge expense. The Open Group does it for us; that's part of the service, and it has evolved and grown. I don't know what it was originally, but it has grown to 45,000 tests, and they've made the process more efficient. And it's a collaborative process. If we have an issue, whether it's our issue or a test-suite issue, there's great responsiveness.
So kudos to The Open Group, because they make it easy for us to certify. It's really our obligation to get into that discipline, but if we factor it into the typical QA process as we release the operating system, whether it's an update or a patch or whatever, then it becomes pretty straightforward. By the next major release that you want to certify, you've done most of the heavy lifting. Again, The Open Group makes it really easy to do that.
Mathews: Another element that's important on this cost point goes back to the standards and the cost of doing development. Imagine being a software ISV. Imagine a world where there were no standards. That world existed at one point in time, and what it caused is this: ISVs had to spend significant effort to port their software to each platform.
This is because the interfaces and the capabilities on all of those platforms were different. You would see differences all the way across. Now, with the standards, ISVs basically develop for only one platform: the platform defined by the standards.
So that's been crucial. The standards have actually encouraged innovation in the software industry, because they made it easier for developers to develop and less costly for them to provide their software across a broad range of platforms.
So that’s been crucial. We have three people from the major UNIX vendors on the panel, but
there are other players there too, and the standards have been critical over time for everybody,
particularly when the UNIX market was made up of a lot of vendors.
Gardner: So we understand the value of standards, and we see the role that a neutral third party can play in keeping those standards on track and moving rapidly. Are there some lessons from UNIX of the past 20 years that we can apply to some of the new areas where standards are needed? I'm thinking about cloud interoperability and hybrid cloud, so that you could run on-premises and then have those applications seamlessly move to a public cloud environment and back.
Andrew, starting with you, what is it about the UNIX model and The Open Group certification and standardization model that we might apply to efforts such as OpenStack or Cloud Foundry, or other efforts to make a seamless environment for the hybrid cloud?
Exciting problem
Josey: In our standards process, we're able to take on almost any problem, and this would certainly be a very exciting one for us to tackle. We're able to bring different parties together, looking for commonality, to try to build consensus.
We get people in the room to talk through the different points of view. What The Open Group is able to do is provide a safe harbor where the different vendors can come in and not be seen as talking in an anti-competitive way, but actually discuss the differences in their implementations and decide on the best common way to go forward in setting a standard.
Gardner: Anyone else on the relationship between UNIX and hybrid cloud in the next several
years?
Johnson: I can talk to it a little bit. The real opportunity -- and I hope people listening to this, especially the OpenStack community, take note -- is that true innovation is best done on a foundation. OpenStack is a number of loosely affiliated communities delivering great progress, but there are interoperability gaps; not by intent, it's just that people are moving fast. If some foundation elements can be built, that's great for them, because then we, as vendors, can more easily support the solutions that these communities are bringing to us, and then we can deliver to our customers.
Cloud computing is the Wild West. We have Azure, OpenStack, and AWS, and we could all benefit from some consistency. Now, I know that each of our companies will go to great lengths to make sure
that our customers don't see that inconsistency. So we bear the burden for that, but what if we
could spend more time helping the communities be more successful rather than, as I mentioned
before, reinventing the wheel? There is a real opportunity to have that synergy.
Gardner: Jeff.
Kyle: In hybrid cloud environments, what UNIX brings to customers is security, reliability, and
flexibility. So the Wild West comment is very true, but UNIX can present that secure, reliable
foundation to a hybrid cloud environment for customers.
Gardner: Very good. Lastly, let’s look at this not just through the lens of technology but some of
the more intangible human cultural issues like trust. It seems to me that, at the end of the day,
what would make something successful as long as UNIX has been successful is if enough people
from different ecosystems, from different vantage points, have enough trust in that process, in
that technology, and through the mutual interdependency of the people in that ecosystem to keep
it moving forward. So let’s look at this from the issue of trust and why we think that that's going
to enable a long history for UNIX to come. Why don’t we start with you, Andrew?
Josey: We like to think The Open Group is a trusted party for building standards and that we
hold the specification in trust for the industry and do the best thing for it. We're fully committed
always to continue working in that area. We're basically the secretariat and so we're enabling our
customers to save a lot of cost. We're able to divide up the cost. If The Open Group does
something once, that’s much cheaper than everybody doing the same thing themselves.
Gardner: Darrin, do you agree with my premise that trust has been an important ingredient that
has allowed UNIX to be so successful? How do we keep that going?
One word: Open
Johnson: The foundation of UNIX, even going back to the original development, but certainly since standards came about, is one word: "open." You can have an open dialogue to which anybody is invited. In the case of the Austin Group, it's everybody. In the case of any of the efforts around UNIX, it's an open process with open involvement, and in the case of The Open Group, which is another kind of open, it's vendor-neutral. Their goal is to find a vendor-neutral solution.
Also look at it this way. We have IBM, HPE, and Oracle sitting here, and, I'll say, virtually Linux. Other communities that are participating are coming to mutual agreements, and this is what we believe is best.
And you know what, it's open to disagreement. We disagree all the time, but in the end what we deliver and execute is by mutual agreement. So it's open, it's deterministic, and we all agree on it.
If I were a customer, IT professional, or even a developer, I'd be going, "This foundation is
something on which I want to innovate, because I can trust that it will be consistent." The Open
Group is not going to go away any time soon, celebrating 20 years of supporting the standard.
There's going to be another 20 years.
And the great thing is that there is a lot of opportunity to innovate in computer science in general, and the standard is building that foundation, taking advantage of topics like security, virtualization, and mobility; the list goes on. We even have the opportunity to build, in an open way, something that people can trust.
Gardner: Tom, openness and trust -- a good model for the next 20 years?
Mathews: It is a good model. Darrin touched on it. If we need proof, we have 20 years of it. The Open Group has brought together major competitors and, as Darrin said, it's always been very open, and people have always, even with disagreement, come to a common consensus. So The Open Group has been very effective at establishing that kind of environment, that kind of trust.
Gardner: I’m afraid we'll have to leave it there. Please join me in thanking our panelists today
for joining this discussion about enabling innovation through UNIX on its 20th anniversary.
Congratulations to The Open Group and to the UNIX community for that.
And also look for more information on UNIX on The Open Group web site,
www.opengroup.org, and thank you all for your attention and input.
Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: The Open
Group.
Transcript of a sponsored discussion on how UNIX has evolved in the 20-year history of UNIX
and the role of The Open Group in maintaining and updating the standard. Copyright The Open
Group and Interarbor Solutions, LLC, 2005-2016. All rights reserved.