CCM4902 – Postgraduate Project
Linux on the desktop:
A study into why it has failed to succeed in
capturing desktop market share
Adam Lalani
M00549948
Supervisor: Santhosh Menon
27 September 2016
"A thesis submitted in partial fulfilment of the requirements for the
degree of Master of Science in Computer Network Management."
Table of Contents
Abstract...................................................................................................................................................4
List of Figures.........................................................................................................................................5
List of Tables ..........................................................................................................................................5
Introduction.............................................................................................................................................6
Background.........................................................................................................................................6
Problem Statement..............................................................................................................................9
Research Objectives..........................................................................................................................12
Approach...........................................................................................................................................13
Literature Review..................................................................................................................................14
Timeline............................................................................................................................................14
Process (Method for collection) – sources, keywords ......................................................................15
Review of Topics..............................................................................................................................18
Conclusions.......................................................................................................................................22
Output – a simple definition, a conceptual model (Dimensions)......................................................27
Literature Gap...................................................................................................................................27
An Experimental Comparison of Linux and Windows.........................................................................30
Experimental Procedure....................................................................................................................31
Stage 1 - Installation .........................................................................................................................32
Stage 2 – Start Up / Shutdown..........................................................................................................32
Stage 3 – I/O Intensive Operations ...................................................................................................33
Stage 4 – Processor Intensive Operations.........................................................................................33
Stage 5 – Power Management...........................................................................................................34
Presentation of Results......................................................................................................................34
Discussion of Results........................................................................................................................36
Interviews with IT Professionals ..........................................................................................................38
Interview Procedure..........................................................................................................................42
Discussion of Results........................................................................................................................43
Conclusion ............................................................................................................................................53
Appendix A - The history of Unix and Unix-like operating systems ...................................................57
Appendix B - Interviews.......................................................................................................................76
Interview 1 – Robert Fitzjohn...........................................................................................................76
Interview 2 – Prasad KM ..................................................................................................................86
Interview 3 – Sanjay Banerjee ..........................................................................................................99
Interview 4 – Renjith Janardhanan..................................................................................................110
Interview 5 – Glen Coutinho...........................................................................................................126
References...........................................................................................................................................136
Abstract
The Linux kernel has been wildly successful since its creation in 1991 by Linus Torvalds. Propelled
forward by the diffusion of the Internet and portable devices, Linux is now used in over 1.4 billion
devices – powering inter alia smartphones, tablets, the social media juggernaut Facebook, nuclear
submarines and the International Space Station. Despite this success, it runs on just 1.74% of
desktop PCs.
Two lines of inquiry were followed to ascertain the reason(s) for Linux’s lack of success on the desktop
– firstly, an experimental comparison between Linux Fedora 24 and Windows 10 was undertaken, in
order to demonstrate that the lack of market share was not as a result of deficiencies of the operating
system itself, and secondly, qualitative interviews were conducted with five IT industry professionals with
a combined 96 years of experience – responsible between them for the purchasing, configuration,
support and usage of tens of thousands of PCs during their careers.
The experimental comparison showed that the performance and functionality of Linux are similar enough
to those of Windows to be discounted as a factor in its lack of adoption on desktop PCs, whilst the qualitative
interviews established that the fundamental reason for the lack of success was due to the lack of a ‘killer
app’. Windows has Microsoft Office, but such a ‘killer app’ does not exist on the Linux platform.
Furthermore, the Linux kernel came too late to become widely prevalent during the desktop PC
explosion that began in the early 1990s, whereas its availability at the time of the rise of the Internet era
and the portability revolution allowed it to dominate those market spaces.
In the case of the desktop, it was the right kernel at the wrong time.
List of Figures
Figure 1 – Number of Scholarly and Peer-Review Papers on Summon, timeline based......................14
Figure 2 – Keywords Established From Content Analysis...................................................................15
Figure 3 – Classification of final set of publications for literature review ...........................................18
Figure 4 - Output – a simple definition, a conceptual model (Dimensions).........................................27
List of Tables
Table 1 – Desktop Operating System Market Share as at February 2016 (netmarketshare.com) ........10
Table 2 – Mobile/Tablet Operating System Market Share as at February 2016 (netmarketshare.com)
Table 3 – Final set of publications for literature review.......................................................................17
Table 4 – Content overview of papers used for literature review relating to Linux’s architecture.......19
Table 5 – Content overview of papers used for literature review comparing Linux to Windows/other
operating systems..................................................................................................................................20
Table 6 – Content overview of papers used for literature review concerned with the adoption of
Linux/other open source software.........................................................................................................21
Table 7 – Results of the 5 experimental stages.....................................................................................35
Table 8 - Qualitative Interviews - Definitions and Measurements .......................................................42
Chapter 1
Introduction
Background
“…I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for
386(486) AT clones…”
(Linus Torvalds, Usenet posting 25 August 1991) (Peng et al, 2014)
On the 25th of August 1991, an unknown 21-year-old student at the University of Helsinki in Finland,
named Linus Torvalds, posted on a Usenet forum that he was working on creating and releasing an
operating system kernel based on Minix, a Unix-like operating system (Malone and Laubacher, 1999).
His stated intention was that it would be created for hobbyist purposes, and would not be intended for
professional use.
Minix had been created by Andrew S Tanenbaum (Tanenbaum, 1987) to help him teach operating
systems to his students at Vrije Universiteit Amsterdam in the Netherlands. The idea of creating
his own operating system came about when Tanenbaum was teaching his students how to use
AT&T’s Unix version 6. As Tanenbaum stated, “The bean counters at AT&T didn’t like this: having
every computer science student in the world learn about its product was a horrible idea. Version 7 came
out with a license that said, “thou shalt not teach,” so you couldn’t teach version 7 Unix anymore”
(Severance, 2014). So he created his own operating system that was similar enough in its principles to
Unix version 7 that he was able to teach unfettered by AT&T’s draconian licensing constraints.
Minix had become the academic researcher’s platform of choice due to its readily available source code,
which could be examined and changed easily if deemed necessary: it was written in C, it had a system
call interface that worked exactly like that of Unix version 7, and whilst it was a fully-fledged
operating system it was lightweight enough for one person to quickly absorb and comprehend (Mull
and Maginnis, 1991).
Torvalds was himself an avid Minix user. In 1991, he purchased a new Intel 80386-based
PC, but he soon realised that Minix could not take advantage of the enhanced protected mode (also
known as protected virtual address mode) of the newly released processor, so he took it upon himself
to write his own operating system kernel so that he could do so (Dettmer, 1999). His original kernel
was just a basic task-switching kernel – all it could do was display a message from each of two running
processes. Minix was used to compile the kernel and provide a file system. Torvalds managed to post a
semi-complete version of his operating system source code onto an FTP site in November 1991
(Wiegand, 1993).
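The behaviour of that first kernel – two tasks taking turns, each printing its own message – can be loosely illustrated in user space. The sketch below is an analogy only (the real kernel was written for the 80386 in assembly and C, not Python): generator objects stand in for the two running processes, and a round-robin loop stands in for the task switcher.

```python
# Illustrative analogy of a basic task-switching kernel. Each "process" is a
# generator that yields control back to the scheduler after printing its
# message, mimicking a cooperative task switch.

def task(message, iterations):
    """A 'process' that hands control back after producing each message."""
    for _ in range(iterations):
        yield message  # yield = give up the CPU to the scheduler

def round_robin(tasks):
    """Run tasks alternately until all are exhausted; return messages in order."""
    output = []
    while tasks:
        current = tasks.pop(0)        # take the next runnable task
        try:
            output.append(next(current))
            tasks.append(current)     # re-queue the task that just ran
        except StopIteration:
            pass                      # task finished; drop it from the queue
    return output

# Two tasks alternate, just as the two processes in the original demo did.
for line in round_robin([task("AAAA", 2), task("BBBB", 2)]):
    print(line)
```

The essential property being illustrated is that neither task monopolises the processor: control alternates between them on every iteration.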
The following year, Torvalds combined his work with another on-going open-source project entitled
GNU – a recursive acronym for ‘GNU’s Not Unix’ (Casadesus-Masanell and Ghemawat, 2006). GNU was the
brainchild of Richard Stallman (Hars and Ou, 2001), who worked at MIT (Massachusetts Institute of
Technology), and was under development in order to create an entirely free Unix-like operating system.
By 1992, the GNU project had yet to complete its own kernel (Stallman, 1998), but had completed
many other components required for an operating system, which included compilers, a command shell,
libraries, a windowing system and text editors. Torvalds combined his kernel with the readily and freely
available GNU programs to create a fully-fledged operating system (Bokhari, 1995).
Linux was made available under the GNU General Public License. This license allows the freedom to
any end user to have access to and be able to modify the software source code (as long as it is made
clear the source code has been modified), or distribute (and if so desired - charge for) copies of the
software. Additionally, the software can be used in new programs – modified or unmodified – and
in that case the recipient of the software is granted the same freedoms as the distributor (The GNU
General Public License v3.0 – GNU Project – Free Software Foundation, 2007).
The computing landscape in the early 1990s was somewhat different to what it is today. In a May 1990
article in IEEE’s Computer magazine entitled ‘Recent Developments in Operating Systems’ (Boykin
and LoVerso, 1990) it was noted that generally operating systems of the time fell into one of two
categories – the first being referred to as mere “loaders” of programs (such as MS-DOS and DR’s
CP/M) with limited support for additional peripherals, and the second being of a more complex variety
that could offer access to manifold devices on a concurrent basis (examples include AT&T’s Unix and
Data General’s AOS/VS). However, mainly due to the rise to prominence of Ethernet networking,
commoditised CPUs and other significant hardware improvements, future operating systems would
have to address newly evolving requirements to specifically power graphical user interface based
workstations that were interconnected using local area networks (LANs).
As Torvalds began work on his kernel, other operating systems were appearing
that could also harness the power of Intel’s 80386 processor, such as IBM’s OS/2 and Microsoft’s
Windows NT - additionally, at this point in time Unix had just become the first major ‘machine
independent’ operating system, enabling it to run on different hardware platforms. (Wilkes, 1992). All
of this evolution was being driven by the aforementioned newly emerging, resource-hungry usage
scenarios such as networking and graphical/multimedia applications (Cheung and Loong, 1995).
Just a few weeks before Torvalds’s Usenet post, the World Wide Web was first made available to the
public on the Internet (Carbone, 2011). Undoubtedly, the advent of the Internet era would have also
contributed to the necessity for both hardware and operating system improvements. This line of
argument can be strengthened by Curwen and Whalley (2014), who wrote that changes in technology
generally move forward via a series of generations or part generations, and that these changes are
achieved either through better hardware, better software, or a combination of the two. Indeed, as has already
been demonstrated, Torvalds wrote his kernel to harness the power of his newly purchased hardware
in a way that Minix could not. Furthermore, West and Dedrick (2001) assert that the rise of Linux’s
prominence was a direct result of the Internet.
Problem Statement
Almost 25 years after that initial Usenet post, the kernel created by Torvalds, which later became known
as Linux, has gone on to become the number one most used operating system kernel in the world (The
Linux Foundation, no date). Linux finds itself being used for such diverse applications as the running
of nuclear submarines (Claiborne Jr, 2001), the International Space Station (Ortega, 1999), over 1.4
billion portable devices (Vincent, 2015), as well as powering and underpinning the social media
juggernaut Facebook (Zeichick, 2008) inter alia.
Whilst all of this shows that the Linux kernel is versatile and has many use cases, there is one
cross-section of the computing landscape that Linux has, as of the time of writing, not managed to
successfully permeate – the desktop computing space. For the purposes of this paper, the term desktop
computing is defined as traditional desktop or laptop PCs that utilise the x86 instruction set, and
therefore will exclude servers, mobile devices - such as tablets or smartphones, and games consoles.
Operating system market share data for February 2016 is presented in Table 1 for desktop operating
systems, and Table 2 for mobile operating systems. This data was provided by netmarketshare.com, a
website that collects data from the web browsers of individual unique devices that visit one of over
40,000 websites in their content network, as well as from over 430 referral sources including search
engines, enabling them to provide statistics on different web browsers being used, as well as the
operating system(s) used by those browsers (Can you explain the Net Market Share methodology for
collecting data?, 2016).
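The methodology described above – inferring a visitor’s operating system from the web browser – essentially amounts to classifying User-Agent strings. The sketch below illustrates the general idea only; the token table is a hypothetical simplification, not Net Market Share’s actual (proprietary) method.

```python
# A hedged sketch of User-Agent-based operating system detection, as a web
# analytics service might perform it. The token table is illustrative.

UA_TOKENS = [
    ("Windows NT 10.0", "Windows 10"),
    ("Windows NT 6.1", "Windows 7"),
    ("Mac OS X", "Mac OS X"),
    ("Android", "Android"),
    ("Linux", "Linux"),  # must come after Android, whose UA also says 'Linux'
]

def classify_os(user_agent):
    """Return the first matching OS label, or 'Other' if no token matches."""
    for token, label in UA_TOKENS:
        if token in user_agent:
            return label
    return "Other"

print(classify_os("Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0"))
```

Note that ordering matters: Android user agents also contain the substring ‘Linux’, so the more specific token must be tested first – a detail that real analytics services must handle carefully, and one reason desktop Linux figures are difficult to measure precisely.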
Operating System Total Market Share
Windows 7 52.41%
Windows 10 12.31%
Windows XP 11.34%
Windows 8.1 10.13%
Mac OS X 10.11 3.57%
Windows 8 2.56%
Mac OS X 10.10 2.27%
Linux 1.74%
Windows Vista 1.68%
Mac OS X 10.9 0.86%
Mac OS X 10.6 0.35%
Mac OS X 10.8 0.29%
Mac OS X 10.7 0.29%
Windows NT 0.10%
Mac OS X 10.5 0.06%
Mac OS X 10.4 0.02%
Windows 2000 0.01%
Windows 98 0.01%
Mac OS X (no version reported) 0.00%
Table 1 – Desktop Operating System Market Share as at February 2016 (netmarketshare.com)
Operating System Total Market Share
Android 59.65%
iOS 32.28%
Windows Phone 2.57%
Java ME 2.40%
Symbian 1.57%
Blackberry 1.45%
Samsung 0.05%
Kindle 0.02%
Bada 0.01%
Windows Mobile 0.00%
LG 0.00%
Table 2 – Mobile/Tablet Operating System Market Share as at February 2016
(netmarketshare.com)
In addition to the data presented in Table 1 and Table 2, w3techs.com (Usage statistics and market share
of Unix for websites, 2016) states that 36.2% of the top 10 million websites (based on rankings collated
by Alexa, a company belonging to Amazon.com) are powered by the Linux kernel.
Therefore, using those data sources as evidence, it is clear that Linux has failed to capture desktop
market share whilst it has been a proven success on mobile devices and mission-critical web servers on
the Internet. The intention of this paper is to perform exploratory research in order to establish the
reasons for Linux’s failure to penetrate the desktop computing space. It will be demonstrated that Linux
is comparable in features and performance to the other popular desktop operating systems that it is
ranked against in Table 1, so it stands to reason that there must be other reasons for this disparity in
market share versus other market segments, which this paper will attempt to uncover.
The working hypothesis is that Linux has failed to achieve a sizeable portion of the desktop operating
system market for a multitude of reasons, stated below:
• It is not preinstalled on new PCs that are sold
• There are too many Linux distributions available, which has led to fragmentation
• Different package managers are used by different distributions
• Multiple desktop GUI environment choices
• A perceived lack of user friendliness and a steep learning curve
• Deficiencies in hardware support, especially for graphics adapters
• Paucity of available software/native versions of popular applications
Research Objectives
The research objectives of this paper are:
• To examine the history of Linux from the evolutionary perspective of Unix and other Unix-like
operating systems (refer to Appendix A)
• To experiment with various competing operating systems to better understand the difficulties a
user might face in getting up and running
• To establish the causes of Linux on the desktop’s failure through qualitative interviews
• To understand the reasons for Linux’s success on other, non-desktop hardware platforms
• To discover whether it is possible to reverse the trend, and how it might be reversed
Approach
In order to prove or disprove the working hypothesis, and to establish the reasons for Linux’s success
on other, non-desktop platforms, Linux will be compared to other operating systems through
experimentation with installation and configuration, via the creation of a desktop base image for
each operating system. The working hypothesis will be interrogated further through qualitative
interviews with a number of IT professionals known to the researcher. Once it has been proved or
disproved, an answer will finally be sought as to whether there is a possibility of reversing the
trend and, if so, how it might be done.
Additionally, in Appendix A, the history of Unix and Unix-like operating systems is presented to
demonstrate how Linux has evolved into what it is at the time of writing.
Chapter 2
Literature Review
“…The time will come when diligent research over long periods will bring to light things which now
lie hidden…”
(Seneca, Natural Questions) (Ellis, 1998)
Timeline
As was discussed in the introduction, Torvalds’s first version of the Linux kernel was released online
in November 1991, so the literature review timeline spans from 1991 to the present. Initially, a very
loose preliminary search was performed using Summon – the University of Middlesex’s database of
publications – for the keyword ‘Linux’. As demonstrated in Figure 1, the most recent decade or so
provides quite a significant body of research on Linux in general to be delved into.
Figure 1 – Number of Scholarly and Peer-Review Papers on Summon, timeline based
[Bar chart: number of scholarly and peer-reviewed papers per year on Summon, 1991–2016; y-axis 0–6,000.]
Process (Method for collection) – sources, keywords
Two major databases were used for the literature review on the research topic. The first was Summon,
a database capable of searching many library resources at once, which is provided by the University of
Middlesex to its students. The second database that was used to uncover relevant articles was Google
Scholar. It was felt that these two databases would yield sufficient data for the literature review phase
of this paper.
Only top-quality journal publications and articles, conference proceedings, and peer-reviewed
magazines were utilised. So that a collection of pertinent keywords could be established for a more
focused and well-defined search, an analysis of content was carried out against the websites of prominent
organisations that have a strong involvement in the contemporary world of Linux and its ongoing
propagation as a successful operating system. The organisations that were analysed were the Linux
Foundation, IBM, Dell, Fedora, Red Hat, Ubuntu, and DistroWatch. The initial findings from the
keyword analysis are shown in Figure 2.
Figure 2 – Keywords Established From Content Analysis
[Bar chart of keyword frequencies (y-axis 0–25): Open Source, Costs/Free, Cloud, Servers,
X Windows, Ubuntu/Canonical, Community/Communities, Mainframe, Performance, Security,
Operating System, Enterprise/Corporate, Desktop, Windows, Distribution, CPU/Processor, Red Hat,
Suse, Kernel, KDE, Free Software, Workstation, Terminal/Command Line, GNOME, XFree86,
Licensing, Apache, Hardware, Architecture, GNU, POSIX, Repository, Unix.]
To further filter the keywords so that the database searches would bring forward results pertinent
to this paper’s line of enquiry, all keywords that yielded fewer than five results were disregarded,
which left a shortlist of thirteen keywords. Surprisingly, ‘kernel’ came some way down the list. Some
of the remaining keywords on the shortlist, such as ‘Mainframe’, ‘Cloud’ and ‘Ubuntu/Canonical’, were
removed as they were too far removed from the subject matter with which this paper is concerned. A
shortlist of ten keywords would be used to search the selected databases (in conjunction with the
associated word ‘Linux’):
• Open source
• Cost / Free
• Server
• X Windows
• Community / Communities
• Performance
• Security
• Operating System
• Enterprise / Corporate
• Desktop
Search results would only be considered if they fell under the remit of computer science, and yet that
still yielded a combined total of 74,275 publications on Summon. As this was such a broad body of
papers, in all likelihood mostly irrelevant, it was decided to further reduce the
number of keywords to include only costs/free, operating system, enterprise/corporate and desktop.
Whilst a significant reduction had been made, a total of 27,117 papers remained. Therefore,
combinations of the keywords were used to achieve a more manageable number of papers to be
reviewed. Eventually, a finalised set of 45 papers was collated that were ascertained to be relevant to
this research. These are summarised in Table 3 and Figure 3.
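The shortlisting process above can be summarised as a frequency filter followed by query construction. The sketch below illustrates that flow; the counts and the query syntax are invented for illustration, not taken from the study’s actual data.

```python
# Illustrative sketch of the keyword-shortlisting rule: discard keywords below
# a frequency threshold, then pair each survivor with the base term 'Linux'
# for a database search.

def shortlist(keyword_counts, threshold=5):
    """Keep keywords appearing at least `threshold` times, most frequent first."""
    kept = [(kw, n) for kw, n in keyword_counts.items() if n >= threshold]
    return [kw for kw, n in sorted(kept, key=lambda pair: -pair[1])]

def search_terms(keywords, base="Linux"):
    """Combine each shortlisted keyword with the base term into a query string."""
    return ['"{}" AND "{}"'.format(base, kw) for kw in keywords]

# Invented example counts: two keywords survive the threshold, two do not.
counts = {"Open source": 23, "Desktop": 11, "XFree86": 2, "POSIX": 1}
print(search_terms(shortlist(counts)))
```

The same two-stage shape (filter, then combine) also explains why the final search space shrank so sharply: each combination narrows the result set multiplicatively rather than additively.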
Journal
1. Elsevier – Journal of Computers and Security (1 article)
2. ACM – SIGOPS Operating Systems Review (2 articles)
3. Elsevier – Journal of Information Economics and Policy (1 article)
4. Journal of Management Science (1 article)
5. Elsevier – Journal of Systems and Software (2 articles)
6. MIS Quarterly (1 article)
7. Library Hi Tech (2 articles)
8. Computing in Science and Engineering (1 article)
9. International Digital Library – Perspectives (1 article)
10. Journal of Academic Librarianship (2 articles)
11. Elsevier – E-Commerce, Internet and Telecommunications Security (1 article)
12. Springer – Knowledge, Technology and Policy (1 article)
13. ACM – Transactions on Security (1 article)
14. Journal of Corporate Accounting and Finance (1 article)
Magazine (Peer-Reviewed)
1. IEEE Security and Privacy (1 article)
2. ACM Queue (1 article)
3. Elsevier – Network Security (1 article)
4. IEEE Software (8 articles)
5. SSM IT Professional (1 article)
6. The CPA Journal (1 article)
7. Library Journal (1 article)
8. IEEE Computer (2 articles)
9. Elsevier – Computer & Security Report (1 article)
Conference Proceedings
1. IEEE Proceedings (8 articles)
2. Proceedings of the Workshop on Standard Making – A Critical Research Frontier for
Information Systems (1 article)
Other
1. Forrester Research (1 article)
Table 3 – Final set of publications for literature review
Figure 3 – Classification of final set of publications for literature review
Review of Topics
The final set of publications used for the literature review was collated and briefly
summarised to better analyse their topics, content, direction, and lines of inquiry, and how they
would fit with the research to be performed in this paper. The papers have been broken down into three
broad areas – the first covers Linux architecture (Table 4), the second comprises papers that compare
Linux to Windows and other operating systems (Table 5), and the third comprises publications focused
on the adoption of Linux and other open source software (Table 6).
[Pie chart: Journal 18, Magazine 17, Conference Proceedings 9, Other 1.]
Linux Architecture
Lu, et al. (2014): In-depth study into Linux file system evolution and its features – demonstrating
that the Ext4 file system is ruggedised enough for use.
Harji, et al. (2011): Demonstrates that different kernel versions of Linux have major performance
variations between them.
Dukan, et al. (2014): An analysis of performance versus power consumption between Intel/AMD and
ARM-based processors using Linux – concluding that the type of processor architecture is becoming
irrelevant, an indication of the future direction of computing.
Xiao and Chen (2015): Comprehensive study into potential logging overhead issues when using Linux
without adaptive auditing.
Thiruvathukal (2004): Further evidence of distribution fragmentation, as well as a look at some of
Linux’s perceived weaknesses, such as hardware support and binary package dependencies.
Radcliffe (2009): Comparatively examines how access to hardware is controlled in Linux, FreeBSD
and Windows.
Shankar and Kurth (2004): An evaluation of the security implications of open-source code, such as is
used in the Linux kernel.
Harji, et al. (2013): Discussion of the complexity and problems encountered during Linux kernel
upgrades.
Table 4 – Content overview of papers used for literature review relating to Linux’s architecture
Comparison to Windows / Other Operating Systems
Bean, et al. (2004): Concerned with establishing that open source operating systems and software in
general are primed to perform akin to their proprietary counterparts.
Chaudri and Patja (2004): Shows that whenever possible, Microsoft has sought to perpetuate its
operating system monopoly through the use of litigation.
Macedonia (2001): Focused on Linux’s inability to compete with Windows as the PC gamer’s platform
of choice, and the reasons why.
Massey (2005): Discusses a 2005 open source software conference, where claims were made that 2005
would be the year that Linux would break through on the desktop.
Goth (2005): Describes how Linux and other open source software has matured to rival commercial
software, and argues that how to move to open source has become more important than whether to do so.
Sanders (1998): Highlights how Microsoft assimilates the functionality of emerging software to
aggressively dominate the software industry, to the detriment of others.
Hilley (2002): Establishes that, as early as 2002, governments and government agencies across the
world began Linux adoption programmes, much to Microsoft’s chagrin.
Dougherty and Schadt (2010): A case study demonstrating that widely used Windows applications have
Linux-based alternatives, whilst cautioning that some software may never have an alternative.
Coyle (2008): Shows where Linux lags behind its contemporaries, and that there are hundreds of
distributions to choose from.
Kshetri (2007): Argues that software piracy (principally of Windows) takes away potential Linux
market share on the desktop.
Dedeke (2009): Proposes that Linux is not necessarily better than Windows from a vulnerability
perspective.
Tsegaye and Foss (2004): A comparative study of Windows and Linux device driver implementation,
praising Windows’ superior plug-and-play support versus Linux.
Salah, et al. (2013): A review and analysis of security concerns when deploying commoditised
operating systems.
Casadesus-Masanell and Ghemawat (2006): Provides a close look at what motivates contributors to
Linux and other open source development projects, and how Linux’s availability causes competitors
like Microsoft to reduce their pricing to remain competitive.
West and Dedrick (2001): Study into the rise of Linux – primarily focused on the motivations of
suppliers and buyers of complementary assets, as well as how Microsoft reacted to this changing
landscape.
Stange (2015): Highlights that in an IT environment it is common to find a mixture of different
operating systems in use.
Table 5 – Content overview of papers used for literature review comparing Linux to
Windows/other operating systems
Adoption – Reasons for, Costs, Drawbacks, Risks, Benefits
Paper Description
Giera and Brown
(2004)
A comprehensive research into the costs, drawbacks and risks associated with migrating
to open source software – specifically the differences versus commercial software.
Young (1999) An article that argues that the claim that Linux systems potentially have a lower cost of
ownership across the lifecycle may be naïve despite the fact the operating system is free
of charge.
Lewis (1999) Asserts that open source software does not become mainstream unless commercialised.
Leibovitch (1999) An early case study into an all Linux enterprise – weighing up Linux’s strengths against
its barriers to its acceptance. Despite being an older paper, the same arguments appear
to hold true against contemporary literature, making it a valuable primary source.
Ven, et al. (2008) Examination of advantages and disadvantages of open source software adoption –
specific to Linux.
Gwebu and Wang
(2010)
An exploratory study of the user perceptions of open source software adopters – to see
if there are different mind-sets involved in those who decide to adopt.
Auger (2004) Discussed how older hardware can be repurposed by using Linux, by stripping away
unnecessary features and overhead, thus leading to cost savings.
Maddox and
Putnam (1999)
A paper highlighting both positives and negatives to Linux adoption, mainly from a cost
centric view.
McLaren (2000) Paper that discussed the notion that Linux is essentially an unsupported operating
system.
Delozier (2008) Another paper that discussed Linux fragmentation – many distributions and desktop
environments – however does positively propose software alternatives to commercial
applications on other platforms.
Chau and Tam
(1997)
An exploratory study into factors that impact the adoption of open source software and
systems.
Kirby (2000) More evidence of Linux distribution fragmentation; discussion of Linux supporting
various hardware platforms and being optimal when looking to extend the useful life
of old hardware. Argues that the cost of software is not the sole utility of Linux.
Mustonen (2002) Research undertaken into the economic logic of Linux and other open source
applications.
West and Dedrick
(2001)
Conference paper that establishes the reasons for the rise of Linux, and presents research
into the adoption motivation of various organisations between 1995 and 1999.
Anand (2015) Another paper that discusses fragmentation in Linux desktop GUIs and distributions,
but also establishes positive reasons for using a Linux distribution.
Dedrick and West
(2004)
Exploratory study into the various factors influencing open source platform adoption,
and the processes used to evaluate and then implement such technologies.
Ajila and Wu
(2007)
Empirical study into factors that cause an effect on open source software development
economics, as well as understanding the steps involved in open source software
adoption.
Kshetri (2004) Comparison of macro and micro influences in decision to adopt Linux in developing
nations – asserts a lack of interoperable software is an issue requiring attention.
Dedrick and West
(2003)
Looks at the consequences of adopting standards – from technological, environmental
and organisational standpoints.
Bokhari (1995) Establishes that a high level of system administrator competency is required to support
Linux in a networked environment.
Decrem (2004) Article that looks at obstacles to Linux’s broader adoption on the desktop – several of
which are established.
Table 6 – Content overview of papers used for literature review concerned with the adoption of
Linux/other open source software
Conclusions
As pointed out by Stange (2015), it has become commonplace to find an amalgam of different operating
systems within an IT operation. In the past, supporting Unix-like operating systems (such as Linux) in
a networked environment required a high level of system administrator skill and proficiency in order
to maintain both system and network stability (Bokhari, 1995).
Despite this, towards the latter half of the 1990s, organisations began to sense that there was value in
exploring the possible adoption of open source operating system software primarily to avoid the
constrictions that were imposed by the use of proprietary software (Chau and Tam, 1997). Perhaps
sensing this, at around the same time, Microsoft had begun to embark upon an aggressive strategy of
incorporating any well-received new features introduced by other software companies into their own
Windows operating system with the overall effect (and probable motivation) of removing most of their
competitors from the market (Sanders, 1998).
Around this time, cost perspective implications began to arise in discussions. Some held the position
that in order to succeed, Linux would have to become commercialised and become a chargeable product
(Lewis, 1999). Others began to debate that whilst Linux is free of charge, the actual total cost of
ownership makes it more expensive than Windows – in the main due to having to spend more on
software maintenance and support than one would spend on Windows – fitting quite logically into
Bokhari’s ‘administrator skills proficiency’ requirement previously mentioned (Young, 1999). This
argument is further elaborated upon by McLaren (2000), who states that whilst being free is Linux’s
biggest selling point, it is essentially an unsupported operating system.
However, arguing against Young in the same edition of the same publication, Kirby stated that focusing
solely on cost(s) detracts from the utility of Linux, especially as it can be used to increase the
longevity of hardware beyond the traditional vendor supported lifecycles, and would therefore offer an
advantage against its commercially available operating system rivals (Kirby, 2000). The same benefit
was also highlighted by Auger (2004).
Around the same time (in the late 1990s), the earliest case studies of the corporate use of Linux began
to appear. One such case study focused on a Canadian start-up, called Starnix, which adopted Linux
due to its primary technical strengths of scalability, flexibility and reliability (Leibovitch, 1999). Again,
the topic of support was highlighted as an impediment to the widespread adoption of Linux, although
in the case of Starnix it was not an issue, because the Unix-based background of its team provided
the complementary skill set required to support their set-up.
At a similar juncture, papers began to appear that charted, studied and analysed the rise of Linux (the
two 2001 papers by West and Dedrick). It has already been touched upon in the introduction
that Linux’s rise to prominence is as a direct result of the Internet. The two aforementioned papers
discuss that often, new platforms (be it hardware or software) only become acceptable to IT
departments, ordinarily resistant to change, when these platforms are used to introduce new usage case
scenarios – and specifically in the case of Linux, its most common early usage cases were Internet
centric – being used for web services, firewalls, security and other such similar services.
Once more, those papers agree with the sentiments already written: that Linux requires support staff
of technical sophistication, that it offers cost savings through the use of pre-existing hardware, and
that industry giants such as IBM and HP needed to throw their weight behind the commercialisation
of the operating system in order for it to be better positioned against Microsoft – which by 2001 had
begun to happen. Dedrick and West also discuss the notion of complementary assets – i.e. that in
order for Linux to gain traction, those industry giants must provide a complementary basket of both
hardware and software, which would theoretically in turn encourage more widespread adoption of
both, in a hand-in-hand fashion (also discussed by Decrem, 2004). They also warned against the
concept of “forking” – essentially the lack of adoption of a common standard – becoming an
impediment to Linux adoption.
This concept of fragmentation (or, as Dedrick and West put it, “forking”) is in all likelihood one of the
central contributors to the complication and confusion surrounding Linux adoption. Many papers have
highlighted that there is a myriad of available Linux distributions, and that this abundance of flavours
makes organisations more loath to adopt it (Kirby, 2000; Anand, 2015; Delozier, 2008; Coyle, 2008;
Thiruvathukal, 2004; Decrem, 2004).
Subsequent research was carried out in 2003 and 2004 aimed at investigating the reasons that might
influence the adoption of open source software (Dedrick and West, 2003) (Dedrick and West, 2004).
Their research ascertained that the choice of server software did not affect how the general employee
populace viewed their computing experience – one interview respondent said “(the users) don’t know,
(and) don’t care” – meaning that so long as the underlying platform is not obvious, it has little effect on
the end user. Once more, the need for complementary Linux skills was highlighted as an obstacle to
adoption. The most prominent issue was the potential inability to run third party applications on Linux
(also corroborated by Kshetri, 2004 and Decrem, 2004). However, several advantages were cited,
namely the reduction of software costs and the ability to repurpose otherwise obsolete hardware – all
positives already discussed. Such adoption decisions are, however, said to be made infrequently,
probably due to the aforementioned resistance to change.
Some additional adoption factors were uncovered in the same two papers (Dedrick and West, 2003;
Dedrick and West, 2004), namely ‘slack’ and ‘innovation’. With innovation, the inference is that
following a path of innovation leads to the earlier adoption of new technology, and that such early
adoption is a direct result of the strategy laid out by the business and how IT is aligned to it. So, if IT
is of central strategic importance to a business, this will lead to earlier adoption of technology such
as Linux.
Looking more closely at the concept of slack: for an organisation that has spare IT department human
resources capacity but limited financial spending power, it begins to make sense to use that human
resource slack to save money by adopting a free operating system, because the additional human
capacity allows time and effort for experimentation with new technology (termed “trialability” by
Dedrick and West) and for learning that fills in the skill-set gap, making it no longer an obstacle to
possible deployment.
During the investigation into the relevant literature, it also became apparent that a number of papers
were concerned with comparing Linux to its contemporaries – in the main, with Microsoft’s Windows
operating system. One of the several strengths of Windows on the desktop is its prevalence as a
platform for computer games. Linux has overall failed to dent the computer games market – primarily
due to the inability to support Microsoft’s DirectX graphics API, along with audio driver issues
(Macedonia, 2001). At the time of writing there is still no native DirectX support on Linux.
It was not only businesses and organisations that were investigating potential systems migrations to
the Linux platform – governments worldwide began feasibility studies with the serious intent of
migrating away from proprietary platforms. These included both the German and United States
governments, despite Microsoft’s best attempts to propagate the notion that open source operating
systems were inherently insecure compared to their own (Hilley, 2002).
This movement began to gain further momentum; as pointed out by Bean et al. (2004), major
computer industry players like Hewlett Packard and IBM were heavily marketing their Linux based
hardware – with IBM even using their own employees as field testers for Linux (on the desktop) to
ascertain its impact on worker productivity. Not taking this lightly, Microsoft began an aggressive
campaign of litigation in order to maintain the status quo of their monopoly (Chaudri and Patja, 2004).
However, there were still several advantages to using Windows over Linux. One of those advantages
related to hardware support (Tsegaye and Foss, 2004). They stated that ideally the design of device
drivers should reduce the necessity for end user interaction in order to allow the full functionality of the
device in question. Windows handles this rather better than Linux, especially when it comes to plug and
play operability. For an end user on a desktop, this kind of ease of use is, to say the least, rather
important.
The year 2005 was talked about as the year that Linux would finally break through into mainstream
desktop use (Massey, 2005). The ecosystem of Linux had matured to a point that it could now be
considered as being on par with its commercial rivals, with the question of whether one should move to
open source software evolving into how one would make the leap (Goth, 2005).
These developments, and the changing attitudes towards Linux and open source software in general,
led to Microsoft reducing the pricing of its software because of the availability of Linux, in order to
remain competitive – as Linux could just be downloaded free of charge (Casadesus-Masanell and
Ghemawat, 2006). Casadesus-Masanell and Ghemawat also proposed the idea that software piracy of
Windows has a detrimental impact on the installed base of Linux (a sentiment also echoed by Kshetri,
2007).
Following on, further comparisons were made between Linux and Windows in terms of security
vulnerabilities. Dedeke (2009) wrote that whilst Linux has an overall perception of being more secure
and therefore less vulnerable compared to Windows, his research that analysed both Red Hat Linux and
Windows between 1997 and 2005 indicated that Red Hat had more reported vulnerabilities during that
time span compared to Windows, and that it was a fallacy to assume Windows was inherently insecure
compared to Linux. Salah et al. (2013), meanwhile, warn that overall most operating systems have
flaws in terms of security.
Dougherty and Schadt (2010) referred to the availability of applications on Linux (such as OpenOffice,
Rhythmbox and Firefox) whose utility was equivalent to similar applications available on Windows
(such as Microsoft Office, iTunes and Internet Explorer). They further elaborated on this, noting
that whilst there were like-for-like applications for many usage case scenarios, making the choice to use
Linux did exclude the ability to use certain applications that may never be ported over to or made for
Linux, and this consideration should not be taken lightly.
Architecture-wise, Linux also has some hurdles to overcome. One major concern centres on
kernel upgrades: knowing when to upgrade the kernel (and to which version) is a serious concern
(Harji et al, 2011; Harji et al, 2013). There are significant performance variances between different
kernel versions, and without referring either to benchmarks that can be found online, or through testing
on the job, it is difficult to know at what point to upgrade or not upgrade the kernel. Further to that, in
the researcher’s own experience, a kernel upgrade can break graphics driver dependencies, rendering
the GUI portion of Linux unusable, as the graphics drivers are compiled using whatever version of the
kernel was available at the time, and would need to be recompiled using the newer kernel.
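In practice, the usual recovery from that breakage is to rebuild the out-of-tree driver module against the new kernel's headers. The following is a minimal sketch of that check-and-rebuild step, assuming a system where DKMS (Dynamic Kernel Module Support) manages such modules; it illustrates generic Linux administration practice, not a procedure taken from the experiment itself:

```shell
#!/bin/sh
# Sketch: after a kernel upgrade, out-of-tree modules (such as a proprietary
# graphics driver) must be rebuilt against the new kernel's headers, or the
# GUI may fail to start. DKMS automates this rebuild where it is installed.
current_kernel="$(uname -r)"
echo "Running kernel: ${current_kernel}"

if command -v dkms >/dev/null 2>&1; then
    dkms status   # lists registered modules and the kernels they are built for
    # Rebuild any modules missing for the running kernel; sudo -n avoids
    # hanging on a password prompt when run non-interactively.
    sudo -n dkms autoinstall -k "${current_kernel}" \
        || echo "rebuild requires root privileges"
else
    echo "dkms not found; modules must be rebuilt manually against the"
    echo "headers in /lib/modules/${current_kernel}/build"
fi
```

Where DKMS is not in use, the driver must be recompiled by hand against the new kernel's build tree – exactly the dependency breakage described above.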
Figure 4 - Output – a simple definition, a conceptual model (Dimensions)
Literature Gap
What has been ascertained from the review of the literature is that from 2009 onwards it has become
much harder to find or uncover new research into Linux adoption on the desktop. This could be due to
changing patterns of computing, as discussed by Dukan et al (2014), who note that traditional PC-based
desktop computing is becoming less relevant in an era of portability driven by the low power
consumption processors used in mobile/tablet devices, low power sensor networks and the lightweight
operating systems based on the Linux kernel that power them – or it may be that the reasons
established before 2009 still hold. It is also possible that Linux (driven by competing juggernauts such
as HP, Dell and IBM) has focused on its core success areas, such as server-centric application use.
As no further research covering this gap has been located, this paper intends to cover the period.
Furthermore, the existing studies are focused primarily on server-side technology, where Linux has
gained widespread acceptance at the time of writing (see introduction for statistics).
A clear pattern emerged during the ‘adoption’ portion of the literature review: the decision to adopt is
generally influenced by weighing the shortage of skills against the saving of costs on software, plus
the ability to reuse older hardware (Maddox and Putnam, 1999; Ven, et al, 2008; Ajila and Wu, 2007;
Giera and Brown, 2004; Decrem, 2004).
As has been demonstrated earlier, whilst Linux does have many comparatively similar applications
versus its Windows nemesis, there is still a lack of applications overall. Device driver and hardware
support also appear to be relevant concerns when considering which of the two to choose.
When it comes to security, there is some conjecture as to which is the more secure operating system –
but with both Windows and Linux, it would depend on the attack surface of any given specific system,
making it more difficult to argue either way, although data provided by Dedeke (2009) leans towards
Linux being the more insecure operating system.
It is therefore apt to revisit the research topic again due to Linux’s widespread adoption on other
platforms, such as servers and portable devices. It stands to reason that a lack of skills (for support or
otherwise) has not hindered its advance on other platforms, so there must be other substantive reasons
that have influenced Linux’s small segment of captured desktop market share when compared to
Linux’s other already mentioned successful platform penetration.
Chapter 3
An Experimental Comparison of Linux and Windows
“…When we design and architect a server, we don't design it for Windows or Linux, we design
it for both. We don't really care, as long as we're selling the one the customer wants. If a server
goes down the production line, it doesn't really know what OS it has on it…”
(Michael Dell, interview with PC Magazine, 3 February 2004 (Miller, 2004))
One of the key contentions of this paper is that Linux, from a functionality and performance perspective,
is comparable to Windows, and therefore should be discounted as a reason for its lack of adoption.
Whilst it is apparent that they do not share the same lineage, it is assumed that the performance of Linux
is not a reason behind its lack of desktop market share. According to research performed by Dedrick
and West (2003 and 2004), one respondent said “(users) don’t know, (and) don’t care (about the
operating system in use)” as long as a user is able to adequately perform the tasks they want to perform.
In order to prove or disprove the aforementioned assumption, an experiment was undertaken between
14 and 17 July 2016 to compare Windows 10 Professional and Linux Fedora Workstation 24. Fedora
was specifically chosen as this is the Linux distribution of choice used by Linus Torvalds (Torvalds,
2014). Various measurement metrics were defined, and are elaborated upon in the experimental
procedure section that follows. Originally, FreeBSD had also been considered as an operating system
candidate for the experiment, but as a GUI has to be separately installed, it was decided to withdraw
FreeBSD due to time constraints, as its withdrawal would not have a material impact on the research
focus of this paper.
Experimental Procedure
A Lenovo X201 laptop (manufactured in 2010) was chosen. As identified in the literature review
section, Auger (2004), Kshetri (2004), Decrem (2004) and Kirby (2000) had written that the longevity
of older hardware can be extended if the hardware was repurposed by having Linux installed as its
operating system. Therefore, the experiments would also be a logical extension to the existing body of
research work. The hardware used was as follows:
 Processor – Intel Core i5 M540 2.53GHz
 8GB RAM (DDR3-1066MHz)
 SanDisk Ultra Plus 256GB SSD Hard Drive
 9 cell battery
 12.1” WXGA LED Display
 On-board Intel HD Graphics Adapter
 External LG GP30NB30 Slim Portable DVD Drive
 SanDisk Cruzer Blade 8GB USB Drive
 Sony Xperia Z3+ Smartphone (for timer measurements)
The experiment was broken down into five broad areas:
 Installation
 Start-up / Shutdown
 I/O intensive operations
 Processor intensive operations
 Power management
Stage 1 - Installation
Prior to each installation, the SanDisk Ultra Plus 256GB SSD Hard Drive used for the experiment had
all partitions deleted, so that it would present itself to the operating system installer as a new empty
drive. During the installation, default automatic drive partitioning was selected on both Windows 10
and Fedora 24. All default installation options were chosen, and one unique user entitled “unitest” was
created, without a password.
The number of unique interactions – such as pointing device clicks, or keyboard entries used to define
a username, were noted, as well as the number of reboots required to arrive at a working desktop, and
the time taken to complete the installation.
Both operating systems were installed using an external LG DVD drive as the laptop did not have an
on-board optical drive. Once the installation had been completed, both operating systems were updated
to the most current patch levels available from their respective providers. The time to install updates
was not measured, as the media used for Windows 10 was issued in late 2015, whereas the Fedora 24
media was downloaded on the first day of the experiment (14 July 2016), and would not therefore
present data that would be comparable.
Stage 2 – Start Up / Shutdown
In order to test both start up and shutdown performance, several timing measurements were recorded:
 The time taken to start up the laptop from the powered off state to the user desktop
 The time taken to completely shut down the laptop from the user desktop to the powered off
state
 The time taken to hibernate the laptop from the user desktop to sleep mode
 The time taken to wake the laptop from sleep mode back to the user desktop
Stage 3 – I/O Intensive Operations
Tests were undertaken to measure I/O performance of the two operating systems using two types of
files:
 Small files – 106 files of varying file types and sizes - total 326MB
 Large file – 1 Matroska Multimedia Container file (.MKV) containing a 1080p Blu Ray rip of
a film – total 3.88GB
In both cases, the small files, and the large file were subjected to three file move operations, and the
time taken to do so on each operating system was recorded:
 Hard drive to hard drive
 Hard drive to USB drive
 USB drive to hard drive
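The timings for these moves were taken with a smartphone stopwatch; for reproducibility, the hard-drive-to-hard-drive measurement could equally be scripted. The sketch below uses placeholder paths and a handful of 1KB stand-in files (the experiment's actual small-file set was 106 mixed files totalling 326MB), and assumes GNU date for sub-second resolution:

```shell
#!/bin/sh
# Sketch: timing a batch file move, analogous to the "hard drive to hard
# drive" test. Paths and file sizes are illustrative placeholders.
set -e
workdir="$(mktemp -d)"
mkdir -p "${workdir}/src" "${workdir}/dst"

# Create five 1KB stand-in files.
for i in 1 2 3 4 5; do
    head -c 1024 /dev/zero > "${workdir}/src/file_${i}.bin"
done

start="$(date +%s%N)"                        # nanoseconds since epoch (GNU date)
mv "${workdir}/src/"*.bin "${workdir}/dst/"
end="$(date +%s%N)"

echo "Move completed in $(( (end - start) / 1000000 )) ms"
rm -rf "${workdir}"
```

The two USB variants of the test would simply substitute the USB drive's mount point for one of the two directories.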
Stage 4 – Processor Intensive Operations
Three sets of processor intensive tests were carried out. In the first test, both operating systems had the
64bit version of Handbrake Open Source Video Transcoder installed and the Matroska Multimedia
Container file from the large file experiment in stage 3 was converted from MKV format to MPEG-4
format (using the default Normal setting) and the duration taken to convert was recorded.
Secondly, Geekbench processor benchmarking software was installed (only the 32-bit version, as a
license must be purchased to use the 64-bit version), and Geekbench benchmark scores were calculated
by the software and noted down.
Finally, WinRAR (for Windows) and RAR (for Linux) – (both 64 bit versions) were used to compress
the Matroska Multimedia Container file using the highest level of compression possible (setting entitled
Best).
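On the Linux side, the two real-world stage-4 workloads reduce to single command-line invocations. The sketch below is hedged: the file name is a placeholder, it assumes HandBrakeCLI and rar are on the PATH, that "Normal" is the HandBrake preset name of that era, and that rar's -m5 switch corresponds to WinRAR's "Best" setting:

```shell
#!/bin/sh
# Sketch of the stage-4 workloads as CLI invocations; sample.mkv stands in
# for the 3.88GB Matroska container used in the experiment.
input="sample.mkv"

if command -v HandBrakeCLI >/dev/null 2>&1; then
    time HandBrakeCLI -i "${input}" -o sample.mp4 --preset Normal
else
    echo "HandBrakeCLI not installed; skipping transcode"
fi

if command -v rar >/dev/null 2>&1; then
    time rar a -m5 sample.rar "${input}"   # -m5 = maximum ("Best") compression
else
    echo "rar not installed; skipping compression"
fi
```

On the Windows side, the equivalent operations were driven through the Handbrake and WinRAR graphical interfaces.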
Stage 5 – Power Management
To measure the effectiveness of the power management of both operating systems, the MPEG-4 video
file created during stage 4 was played on a continuous loop using VLC Media Player, from a fully
charged battery state, until the 9 cell battery was completely discharged and the operating system
initiated and completed a shutdown.
Presentation of Results
Windows 10 Professional
x64, Build 10586.494,
Version 1511
Linux Fedora 24 Workstation
x64, Kernel 4.6.3-300.fc24
Stage 1 – Installation
Installation time 24 minutes, 41 seconds 20 minutes, 39 seconds
Number of clicks/interactions 16 15
Reboots required 3 1
Stage 2 – Start Up / Shutdown
Start Up Time 18.35 seconds 21.10 seconds
Shut Down Time 9.59 seconds 6.22 seconds
Hibernate Time 3.66 seconds 2.09 seconds
Wake from Sleep Time 2.17 seconds 2.09 seconds
Stage 3 – I/O Intensive Operations
Small files – Hard drive to hard drive 7.0 seconds 3.1 seconds
Small files – Hard drive to USB drive 56.8 seconds 32.9 seconds
Small files – USB drive to hard drive 56.5 seconds 16.4 seconds
Large file – Hard drive to hard drive 28.50 seconds 26.00 seconds
Large file – Hard drive to USB drive 10 minutes 2.3 seconds 8 minutes 52.1 seconds
Large file – USB drive to hard drive 3 minutes 13.1 seconds 2 minutes 44.7 seconds
Stage 4 – Processor Intensive Operations
Geekbench 32 bit single core benchmark 1966 2028
Geekbench 32 bit multi core benchmark 4043 4068
WinRAR/RAR 5.4 x64 compress MKV file
on maximum compression setting
8 minutes 56 seconds 9 minutes 50 seconds
Handbrake conversion of MKV file using
normal setting
1 hour 41 minutes 0 seconds 1 hour 48 minutes 19 seconds
Stage 5 – Power Management
Playback time of MPEG-4 file until battery
discharged from full
4 hours 40 minutes 45
seconds
3 hours 49 minutes 58 seconds
Table 7 – Results of the 5 experimental stages
Discussion of Results
The installation of Fedora completed just over 4 minutes faster than Windows. It may have also been
possible to install Fedora in less time, as the install media booted first to a live desktop, and then
provided the option to install the operating system. During the initial boot from the optical media, an
option to directly enter the installation program was presented, but the keypress to initiate it did not
register, and the live desktop was booted instead. A probable reason for the quicker install of Fedora
is that it is a distribution stripped of unnecessary software (although it included the Libre Office
productivity suite, and several other potentially useful applications in the default installation
parameters).
Aside from the time taken to start up, Fedora was quicker to shut down, hibernate and wake from sleep.
One reason for Fedora’s slower start-up time is that, despite the unitest account being configured
without a password, Windows 10’s default behaviour is to boot directly to the desktop without further
interaction, whereas Fedora requires the username to be clicked/selected on the login page before the
GNOME desktop starts up – however, this does not fully account for, or explain, the (almost) 3 second
disparity between the two.
Fedora performed significantly better than Windows did on all six of the I/O intensive tests carried out.
Fedora uses the EXT4 file system versus Microsoft’s NTFS. The performance results are corroborated
by research undertaken by Safee and Voknesh (no date) who stated that generally file operations of a
sequential nature perform more poorly on Windows compared to Linux.
During stage 4 (processor intensive operations), Windows performed much better than Fedora in both
tests undertaken by the researcher (RAR and Handbrake conversion – both using x64 binaries).
Interestingly, Geekbench (albeit benchmarked on 32 bit operations due to licensing restrictions)
performed better on Fedora than Windows. It is entirely possible that Fedora has been optimised to
perform better on benchmarking software – not an entirely unheard of phenomenon (Cai et al, 1998),
or perhaps it performs better using 32-bit processor operations – and if that is the case, at the time of
writing, most new desktops are shipping with 64 bit operating systems and applications and therefore
should be optimised for the same. Whatever the cause for the Geekbench results, the real world tests
measured during the experiment show that Windows was far better in this regard.
The final phase, designed to test power management, yielded a startling disparity. Windows, whilst
playing the same video file, using the same media player (VLC), lasted just over 50 minutes longer than
Fedora. In both cases, neither operating system used any third-party drivers to optimise power settings
or consumption, therefore out of the box Windows was demonstrated to be better than Fedora in this
regard.
As stated at the start of this section, part of the working assumption is that the performance of Linux
should not be a reason for its lack of desktop market share. Based on the results, it can be argued that
Fedora performed better than Windows in some cases (I/O, Start up/ shutdown) as well as being argued
that it is deficient versus Windows in other cases (Processor intensive operations and power
management). Taking a balanced approach between the two, the research indicates that the operating
systems were overall comparable (albeit depending on the usage case scenario), thus proving to a
satisfactory extent that Linux is similar to Windows from a performance angle, as per one of the tenets
of the working hypothesis.
Chapter 4
Interviews with IT Professionals
“… A research method is a strategy of enquiry which moves from the underlying philosophical
assumption to the research design and data collection…”
(Myers and Avison, 2002)
In the previous chapter, it was established through experimentation that Linux is comparable to
Windows overall, from a functionality and performance perspective, and therefore (lack of)
functionality and/or performance can be discounted as a reason for its lack of adoption. Further research
was therefore required, in order to ascertain and establish what the reasons are for the lack of Linux’s
penetration in the desktop operating system market space. Therefore, qualitative interviews were
undertaken with 5 IT professionals, with a cumulative experience of 96 years, working in high pressure
business environments tasked with the responsibility of evaluating, purchasing, maintaining and
monitoring thousands of desktops and servers between them over the course of their careers.
In the problem statement earlier in this paper, the working hypothesis states several postulated reasons
why Linux has not gained traction in the desktop space. Those tenets of the working hypothesis are
restated below for the benefit of the reader:
 It (Linux) is not preinstalled on new PCs that are sold
 There are too many Linux distributions available, which has led to fragmentation
 Different package managers are used by different distributions
 Multiple desktop GUI environment choices
 A perceived lack of user friendliness and a steep learning curve
 Deficiencies in hardware support, especially for graphics adapters
 Paucity of available software/native versions of popular applications
In order to prove or disprove the above statements, the aforementioned information technology
specialists were selected and interviewed because of their exposure, over many years, to a variety of
operating systems; the fact that they have worked, and are working, in diverse industries based in
different geographical areas; and because they were likely to understand technical complexities and
challenges that a layman may not. It was believed that such candidates would therefore know the
reasons for Linux’s failure on the desktop far better than a layman would.
For the purposes of this qualitative research, the interviews would be based upon the principle of
‘phenomenology’ (Husserl, 1970). Phenomenology is a method which encourages a respondent to
provide information that is based upon his/her subjective perception of a particular situation. Questions
are asked with the specific intent that the respondent will provide descriptive responses to the questions
posed to them, devoid of the motivations (or assumptions) of the interviewer – thus allowing for insights
into the behaviour, motivations and actions of the interview subject that are not influenced by the
researcher.
Several potential interview candidates who were approached requested that the questions to be posed
be provided in advance. The researcher made every effort to avoid providing the questions, so that
pre-preparation would be avoided, with the specific intention that the research question would not
be revealed – as having advance sight of the questions could allow the subject to extrapolate the
motivations, assumptions and actions of the interviewer, rendering the ‘phenomenology’ method of
interviewing null and void.
It was intended that the interviews would provide data that either supports or does not support the various tenets of the working hypothesis restated above, and that qualitative interviews would best provide the insight required to answer the research question, in comparison to the quantitative research methods that could alternatively have been undertaken.
The overall question framework for the qualitative interviews was created based upon the working hypothesis and other points of interest raised during the literature review phase of this paper. In total, up to 29 open-ended questions (refer to Table 8) were to be put to each interviewee, in order to gather as much data and information as possible. However, the set of questions was seen as a guideline framework, and could not be rigidly followed, as certain answers elicited during an interview might (and in fact did) inform questions that would otherwise have been asked later in the question structure.
The overall structure of the questions started with a more generalised line of enquiry, such as establishing the respondent's career history and the general changes to operating systems they have observed over the years. The reasoning behind this was to engage the subject in conversation, encouraging them to open up about themselves, whilst narrowing the scope of enquiry with each subsequent question towards specific areas of interest.
Questions and Rationale

1. How old are you?
2. Male or female?
Rationale: Ascertain demographic information about the interviewee.

3. How long have you worked in IT?
4. Can you describe your career from its start to now?
5. Please discuss the technological changes that have occurred during your working career thus far.
Rationale: Gain a general overview of the information technology specialist's employment background and history, and how information technology has changed over the course of their career.

6. Going through your career, can you discuss the operating systems you have used in a personal capacity, those your employer(s) have used, and how that may have changed over the years?
7. What about on portable devices? Please discuss your experience over the years with those devices, and how they have changed from an operating system standpoint.
8. If the answer to 7 does not elicit from the interviewee that Android uses Linux or that iOS uses BSD, inform the interviewee and ask their opinion on that.
Rationale: Learn about the background and opinions of the IT specialist's experience with various operating systems, including portable devices, both at work and personally. Also ascertain whether they are aware that these devices primarily run BSD Unix or Linux.

9. Desktop PC sales are supposedly on the wane – discuss. Are they relevant anymore, and why?
10. During your career, have you ever installed an operating system on a desktop or laptop, and if so, what was it? If not Linux-based, why not?
Rationale: Dive deeper into desktop-related topics. Could the lack of penetration be down to the desktop being less relevant in an era of mobile devices? Also understand the operating system installations undertaken by the respondent.

11. What has been your exposure to Unix and Linux?
12. What do you think (or know) about Linux in general?
Rationale: Start narrowing the questioning down to Linux specifically, initially from an open-ended standpoint.

13. Which Linux distributions are you aware of?
14. Are you aware there are currently 815 unique distributions? What do you think about that?
15. Discuss your experiences with the Linux distributions you are aware of.
16. If the person has in-depth experience, ask about preferred GUIs.
17. If the person has in-depth experience, talk about package management for different distributions.
Rationale: Focus on distribution-related topics (as well as their respective GUIs and package managers, if possible).

18. Is Linux easy to use? Why?
19. In some circles Linux is viewed as difficult to use, requiring substantial training time and effort. What do you think about that statement?
Rationale: Opinions on Linux's ease of use.

20. Why do you think Linux is rarely preinstalled on a new desktop or laptop?
Rationale: Understand the interviewee's view on why Linux is not preinstalled by manufacturers.

21. In your opinion and experience, discuss hardware support with Linux.
22. Do you think Linux performs well on obsolete hardware? If yes, do you use it on obsolete hardware?
23. If not used on obsolete hardware, why not? If because of using Windows, try to elicit whether a pirated version is in use.
24. If 23 doesn't answer that, ask if they think software piracy has an effect on the user base of Linux on the desktop.
Rationale: Is the support of hardware (such as graphics adapters) an impediment to adoption in the mind of the respondent? Do they believe Linux works well on old hardware? Also ascertain whether software piracy is a reason for the lack of adoption.

25. Do you think it is possible to do everything on a Linux desktop that one can do on a Windows desktop? Why?
26. If lack of applications is not cited, ask what the respondent feels about the availability of applications on Linux versus other platforms.
27. With the prevalence of cloud and web apps, would this no longer be an issue (if they believe a lack of apps is an issue)?
Rationale: Test the respondent's understanding of applications on the Linux desktop, and see whether this is viewed as a reason for the lack of adoption.

28. Do users care what operating system runs on their desktop or laptop? Why?
29. If you could set up a network of workstations from scratch on a limited budget, would you consider Linux? Why?
Rationale: Gauge general opinion based on the respondents' perceptions of their users, and see whether Linux would be used to save costs on software licensing, or whether there is simply a general bias against using it.

Table 8 - Qualitative Interviews - Definitions and Measurements
Interview Procedure
The interviews were conducted between 26 July 2016 and 1 August 2016 and were recorded in case future inspection was required to verify the research undertaken. Additionally, transcripts of the interviews were written up; they form Appendix B of this paper.
The respondents were advised that their names and the names of any employers (both past and present) would not be published, to encourage openness and to build trust between interviewer and interviewee, unless they expressly requested publication. Furthermore, those interviewed were advised beforehand that the interview would concern their knowledge of operating systems in general – rather than Linux specifically – to elicit as much data as possible, even if some was not relevant to the purpose of this paper, and to avoid pre-preparation on their part.
Interestingly, all of those interviewed waived their right to anonymity and were happy for their real names, as well as the names of organisations (where provided), to be published. The respondents were also advised that they would be offered a copy of the completed thesis once submitted to the university, to ensure that the process was transparent and that their answers were published exactly as given – and also because, once the end of the questions had been reached, the interview subjects all wanted to know the result(s) of the research.
Discussion of Results
First of all, only one respondent was able to authoritatively answer question 16 about the different GUI options available on Linux. Secondly, question 17, regarding the different package managers, was only asked in one interview, as it was felt that it would detract from the flow of the conversation, and the respondents appeared unlikely to be able to answer it. The researcher felt that not asking it would not diminish the interviews, as questions were still asked about distributions and software in general.
As a result of the incomplete answers to questions 16 and 17, two parts of the working hypothesis could be neither proved nor disproved, and for the balance of this paper they are dropped.
Those two parts to be dropped were:
- Different package managers are used by different distributions
- Multiple desktop GUI environment choices
Overall, each of the five respondents described a similar path of progression in their experiences with operating systems. This path generally followed a DOS -> Windows 3.1 (or 3.11) -> Windows 95 (and so on) pattern.
“…we were using this traditional operating system called MS-DOS 3.0 and then the evolution of the
graphical applications with Windows 3.1 was an amazing thing in front of us. And then say going to
the development of this OS by Microsoft of Windows 95, 98, and the other things. It made a
revolutionary change…”
“…Operating systems – mainly Win systems all the way from Win something and then Win NT and 95
and above…”
“…So I started off personally using DOS, it used to be DR-DOS, then MS-DOS, so then you had
Windows 95, as its own operating system. Then I’ve used Windows NT, XP, Windows 2000…”
All of the respondents have had exposure to Linux, but to varying degrees. In two cases, the first exposure came from free CD media provided on the covers of computer magazines, which at the time (the mid to late 1990s), when Internet penetration was less widespread, appeared to be a vital enabler in the spread of knowledge and information to IT literate individuals (or those who wanted to become IT literate).
“..We used to get CDs with magazines and the CDs used to contain a lot of software. So it was through
that that I came to know about Red Hat and Suse…”
“…they were popular (computer magazines) at the time before the internet and they used to come with
a CD stuck to the front with some software for you try … And that was way to find out new things and
try out new things and on one edition there was a full version of Linux to install…”
Most of the respondents agreed that the desktop as a platform was gradually becoming less relevant in an age of portability. It was agreed that in certain cases, where large screens and other high end hardware are required for very specific tasks, there would still be a place for such hardware. The respondents almost overwhelmingly pointed towards the paradigm shift towards mobility – an area which all respondents were aware is dominated by Android (Linux kernel) and iOS (BSD Unix and Mach kernel).
“…the desktop environment is slowly getting phased out and it is getting into a different type of working
environment…”
“..some of the users require large amounts of storage, where expansion is required, additional
expansion – like those who have high end graphics requirements…the desktop will not phase out from
the market, or for the end user completely … The difference is the demand will not be the same as
before…”
“…in another 5 years the complete, complete, computing platform will be changed with this portable
equipment…”
“…we are using more portable devices like laptops, tablets, phablets and since the UI is more web
based the need for desktop PCs as such is not really there. Especially desktop PCs are not really needed
when we do not have a need for severe client resources like the old systems used to…”
“…there still is a place for that (the desktop) – one, the computing power and two, the form factor -
sometimes you do need to sit at a desk, have a full sized keyboard, a full sized monitor and have a mouse
for input, the ability to use those peripherals to do your job…”
“…I think desktops are becoming less relevant now…so as we going into this mobile era especially,
portable devices, laptops, there’s a bit of a market but we can see things declining there. Desktops are
losing their market share for sure. I mean people want mobility. There may be specific functions, maybe
something high end workstations where you are doing some sort of engineering or drawing – things
like that, which require a lot more resources and necessitate desktops. But I think people are shifting
more towards just getting their work done…”
During the literature review, there was some evidence of software piracy being responsible for Linux's reduced desktop market share (Casadesus-Masanell and Ghemawat, 2006, and Kshetri, 2007). However, this was contradicted by the interviews undertaken, where the notion was dismissed as a major contributing factor. It should be pointed out that the literature referred to dates back almost 10 years before this research was undertaken, and may have been more relevant previously.
“…I don’t think it’s down to piracy. I think it’s down to what people are already familiar with and what
they have…”
“…under those circumstances it shouldn’t really be too much of a piracy issue for Linux…”
“…nowadays nobody is really using a pirated operating system…It’s my opinion, nobody will be
looking for any pirated software…”
A notable point of interest is that it was generally accepted that an end user does not care what operating system is running on their device – whether a desktop, laptop, or portable device. This corroborates earlier research identified during the literature review (Dedrick and West, 2003, and Dedrick and West, 2004). It was clearly established that an end user has a set of generally repetitive tasks to undertake (or perhaps overall repetitive patterns of use) – be it for work or for leisure – and they expect to be able to accomplish those tasks irrespective of the underlying operating system.
“…No, they just want to be familiar, they just want to get their job done, they just want to be able to do
it…”
“…Say, for performing the task – how much, how quickly can they do it, how easily can they do it. That
is a factor which the user will consider when choosing the OS (to use)…”
“…From a personal use (perspective) I don’t think so. From a business use I think whatever makes
them more comfortable as long as they can deliver…”
“…unless if I have specified it to them (the users), they would not know what is the operating system
(in use)...nobody is getting into the operating system core capabilities. Their experience on functionality
is based on core application level experience not on the operating system…”
Referring to the previous paragraph, one of the tenets of the working hypothesis was that the lack of available software/native versions of popular applications is a contributing factor when answering the research question. What became apparent from the discussions is that one key suite of applications is missing from Linux distributions – Microsoft Office. Whilst alternatives are available, it would appear, reading between the lines, that this makes no difference to the perception of users.
“…they expect to find a piece of software just there available, such as Word, Excel, their Outlook…”
“…maybe that is because having used Windows systems for so long, but I feel much more comfortable
working with Windows Excel than Google Excel (Sheets)…”
“…one of the challenges is that most of the applications, say around 75% of the applications are
available or programmed for Windows…Applications availability is very poor under Linux, the Linux
platform…”
“…So they start off talking about ok what are the common applications we use, so say Word, Excel…
Can I use Word and Excel? Some of the features are not as exactly the same as apples for apples and I
think that’s the issue…I think the question they would ask if about applications – Can I do this? Can I
do that? Can I use Word and Excel? For me it’s a bit about the compatibility of the other applications
– Windows has that edge over the others...”
“…It depends upon the compatibility. Some of the applications, the compatibility…”
“…there are a lot of functions that are missing from that, that are only available in the Windows version
right…But from a user perspective, I think the applications are quite limited. So you have your own set
of applications, I think Linux has it, but OS X has its own version of a word processor, or a spreadsheet,
things like that – but functionality wise it’s not up to the mark as some of the Windows Office suite
applications are…”
During the interviews, the respondents were in overall agreement that the primary reason Linux is not often preinstalled on new PCs is familiarity (or rather, in Linux's case, the lack of it) among end users. This feeds into the operating system experience established in most of the interviews, where the respondents themselves followed the path of progression discussed earlier in this section. It can therefore be argued that this is the case for the general user populace, who are exposed to Windows from an early age, usually at school, and so naturally gravitate towards what they already know. In addition, Microsoft's deals with OEM manufacturers to preload its operating system are also a contributing factor, but not the substantive reason.
“…There’s few people who are familiar with it, so the level or knowledge in your typical family,
whereas they’d know Windows already…”
“…People are taught Windows at school…”
“…I would put it under something called an oligopoly, which is something practiced by Microsoft. So,
once they have captured the market, 90 plus, 95% plus of the market, then they can pretty much dictate
or collude with various manufactures to ensure that their systems get on board…”
“…well the consumer market has not accepted Linux, mass consumers have not adapted to the Linux
environment. Every user has adopted the Windows environment. If anyone buys a laptop, anyone would
go for only a Windows operating system. Even with consumers, any business that is selling in the market
they would rather sell the Windows environment than a Linux preinstalled piece of hardware, unless of
course it’s a mobile…”
“…I don’t know if there’s some sort of OEM contract in place or something like that, but one guess I
would have to make would come down to user preference – what’s the most popular OS that they are
used to…for the masses if you look at it everyone’s most familiar with or aware of is Windows – which
I think is what sells. So someone’s going out there to buy a laptop and they come with Linux installed I
don’t think they have such a big market share…”
When those interviewed were questioned about their knowledge of Linux distributions, the answers tended to centre upon Red Hat, Suse and Ubuntu – this tallies well with the keywords established from the content analysis undertaken during the literature review section (refer to Figure 2). None of the respondents were aware of the vast number of distributions available, so the fragmentation of distributions can be disregarded as an influencing factor: those interviewed could not be influenced by something they were not aware of.
“…If someone told me that there was 200 I’d have thought well possibly, but 800 sounds like quite a
lot…”
“…Because it is an open platform, anybody will be able to use their ideas and develop their own OS.
This is actually adding more value and power to this particular OS because the contribution from
multiple people and they have the liberty to take their own ideas into this OS – and that’s the reason
why so many versions have been developed…”
“…Wow. It’s almost like, it feels like a fragmented market…”
“…but I am surprised to see that 800 variants or different flavours (exist)…”
“…I knew there were a lot, but I didn’t expect it to be that many. Definitely over 100 but that’s amazing.
I think it’s a good and bad thing…but in terms of a regular user I think they would find it difficult if
there isn’t a common standard across these distributions. To me it’s a good and bad thing. Each person
has a flavour for what they want, or want to try. So they have many options, but in terms of
standardisation and people having to keep track of different commands and different ways to do things,
that could be a downside to it…”
From a hardware support standpoint, those interviewed generally believed that hardware support in Linux was adequate. The general consensus was that Linux runs well on hardware of differing levels of computational power and/or age. However, particular reference was made twice to graphics adapter support, which was part of the working hypothesis. This tenet has therefore been proved correct, although it is not considered the major contributing factor to the lack of market share – it is just part of the reason.
“…Linux does have its deficiencies on the desktop - I’d say mainly down to graphics…”
“…compatibility is one of the challenges we face both with Linux and Solaris. Some of the devices are
not recognised, and the drivers are not available and the functionality is restricted…so that way there
are huge challenges when it is coming to this OS…”
“…Yes, the hardware was problematic. The drivers especially. You had to look for these compatible
drivers. It wasn’t plug and play, so everything at that time I had to try and download several drivers to
find one that would work. It was problematic… I think mostly it was printers, network cards, I think
graphic cards…”
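The driver complaints quoted above can be checked empirically on a modern Linux system: `lspci -k` reports which kernel driver, if any, is bound to each device. The snippet below parses `lspci -k` style output for the graphics adapter; the sample text is hypothetical, standing in for real command output.

```shell
# Parse `lspci -k`-style output and report the kernel driver bound to the
# VGA adapter. The sample below is illustrative; on a real system one
# would pipe in the output of `lspci -k` itself.
sample='01:00.0 VGA compatible controller: Example Corp GPU
	Kernel driver in use: nouveau
	Kernel modules: nouveau'

printf '%s\n' "$sample" | awk '/Kernel driver in use/ {print $NF}'
# prints: nouveau
```

A device listed with no "Kernel driver in use" line is exactly the situation the respondents describe: hardware that is detected but not functional.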
In the main, it was also established that there is a substantial learning curve when adopting Linux, as well as user-friendliness concerns. However, as several of those interviewed pointed out, this is most likely a result of many years of user exposure to Windows. The learning curve was also cited by one respondent in reference to Apple's OS X operating system, where well known user interactions, such as the right-click, are not present – something Windows users have been accustomed to since their exposure to Windows 3.1/3.11 in the early 1990s. Whilst on the face of it such matters may seem trivial, they are not when a user simply wants to perform his or her particular patterns of use. Therefore, this part of the working hypothesis is also considered proved.
“…So, the problem is the dominance of Windows has been there for so long that it becomes so familiar
when using the system. Just things like right click which on the Mac is a little different and people find
that difficult, so why I said I don’t think they will be widespread adoption is that people are so familiar
with the shortcuts and how to navigate through, I think that has an influence on their decision…”
“…people like us who are brought up on Windows - we know Windows inside out, and then move to
another operating system have to learn everything again…”
“…On top of that, once the users are thoroughly trained, then they, there is reluctance on their part,
on their side to want to learn or migrate to something else…”
“…what I would say is that the application that is extensively used is press the button, wait for the
operating system to load. During that time, they must be looking around, looking at the phone, having
some coffee or something, they don’t care how it comes up…other than that I don’t think that anyone
is really noting what is an operating system. Back then they didn’t notice what was the operating system
and now also they are not knowing that…”
“…I would say that’s fairly accurate. Especially to a person, coming from my background…setting up
the Squid proxy, it did take some time to pick it up so there is some training, even though it was self-
learning. But if you are planning to deploy this, you know say in an office place you would need some
training to get used to it...Over the years, just because they are so use to one OS it could be down to
that…”
Chapter 5
Conclusion
“…The desktop hasn't really taken over the world like Linux has in many other areas, but just looking at my
own use, my desktop looks so much better than I ever could have imagined.…”
(Linus Torvalds, speaking at the Embedded Linux Conference, 2016) (Bhartiya, 2016)
The research question for this paper is “Why has Linux, despite its popularity on many platforms, failed to be successful on the desktop?” To the satisfaction of the researcher, the two-pronged research has answered that question – it is almost entirely due to the lack of popular desktop applications. On the most popular desktop operating system platform (Microsoft Windows), it is Microsoft's Office suite that one could term “the killer app”.
The idea of a platform either succeeding or failing based on the notion of a killer app was also raised
by West and Mace (2010), when they discussed the runaway success of the iPhone. In that particular
case the killer app was the Safari web browser because it could readily access and take advantage of
the estimated 1 trillion web pages available at no cost to users with desktop browsers, in an era when
mobile operators still operated a ‘walled garden’ of services – offering their own selective content whilst
charging their customers an additional subscription cost to access that content.
Linux's lack of a killer app on the desktop, and its overall lack of third party applications, is considered by the researcher to be the primary reason for its failure to succeed in the desktop market, based upon the findings of the research undertaken. This issue was discussed in the literature review section – citing papers from the early 2000s (Dedrick and West, 2003, Dedrick and West, 2004, Kshetri, 2004 and Decrem, 2004) – and clearly nothing has changed in the intervening years between then and the time of writing, as evidenced by the data collected during the qualitative research.
The lack of third party applications has also been responsible for the failure of both Blackberry's BB10 operating system (Reilly, 2016 and Spence, 2013) and Microsoft's Windows Phone operating system (Warren, 2015 and Thurrott, 2016) – so this contention is backed by compelling real world evidence. Specifically, in the case of Blackberry, the operating system kernel was not versatile enough to be successful on other platforms – whereas with Windows Phone there is still an opportunity, due to Microsoft CEO Satya Nadella's Continuum (phone as a PC) strategy, about which he stated “…three years from now, I hope that people will look and say, ‘Oh wow, that’s right, this is a phone that can also be a PC’…” (Thurrott, 2016). Ubuntu is also working on a similar approach with its Unity 8 UI, which aims to converge desktop and portable devices (Wallen, 2016).
The other key reason for Linux's desktop failure is that users in the general computing populace have become used to Windows, and have evolved with Windows as it has evolved – a point that became readily apparent during the interview research undertaken. Microsoft gained its foothold on the desktop long before the Linux kernel matured into version 1.0 on 14 March 1994; Windows 3.1 had been released in 1992 (Gibbs, 2014). Windows 3.1 is still found in the wild, for example running the air traffic control system for Orly Airport in Paris, France (Waugh, 2015 and Whittaker, 2015).
During the course of the interviews conducted, none of the respondents felt that Linux was technologically inferior to Windows or other desktop operating system environments. In most cases, those interviewed went on to praise Linux's design and its use of computational and memory resources. Those opinions are corroborated by the experimental research undertaken, which reached the conclusion that overall (depending on the usage scenario) Windows 10 and Fedora 24 were generally comparable performance-wise.
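A comparison of that kind depends on running an identical, repeatable workload on both systems and timing it. The sketch below illustrates the idea only – it is not the benchmark suite used in the experimental chapter, and the function name is the author's own:

```shell
# Minimal sketch of a repeatable CPU-bound timing of the sort run
# identically on different operating systems to compare them.
# Requires GNU date for nanosecond resolution (%N).
time_workload_ms() {
  start=$(date +%s%N)                  # nanoseconds since the epoch
  i=0
  while [ "$i" -lt 50000 ]; do i=$((i + 1)); done   # fixed busy loop
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))  # elapsed wall time in milliseconds
}

time_workload_ms
```

In practice such a measurement would be repeated many times and averaged, since single runs are noisy.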
It is the opinion of the researcher that Linux has succeeded on other platforms because it was there at the beginning of those particular breakthroughs or advances in technology. This idea is substantiated by two papers uncovered during the literature review (West and Dedrick, 2001 and West and Dedrick, 2001), which discussed how new platforms often become accepted when they are used to support and underpin new usage scenarios. It was specifically pointed out that Linux's most common early usage cases were Internet centric – web services, firewalls, security and other similar services – because it was there, ready to be adapted to those types of usage, at the start of the Internet era.
Similarly, when Google, as part of the Open Handset Alliance, began development of the Android operating system in late 2007, with the Linux kernel at its heart (Industry Leaders Announce Open Platform for Mobile Devices, 2007), it was at the cusp of the portable computing era discussed by Dukan et al. (2014), which was also established as a point during the literature review. Most of the interview respondents, based on their own subjective experiences, discussed the very same matter when questioned (refer to the Discussion of Results portion of the Interview section).
This convergence of computing and communications was prophesied in 1977 by Koji Kobayashi, then president of NEC, when he spoke of a time when telecommunications and (presumably mobile) computing would converge as a result of eventual improvements to the design and technology of integrated circuits (Rumelt, 2011).
As Dukan et al. (2014) explained, this era of portability has been driven by the low power consumption processors used in mobile/tablet devices (dovetailing with Kobayashi), low power sensor networks, and the lightweight operating systems, based on the Linux kernel, that power them – and it has now extended to wearable devices such as smartwatches, as well as other IoT (Internet of Things) devices. In almost every case, these devices are running a Linux kernel.
Even Microsoft has been forced to recognise that Linux is a major force in the operating systems market. Microsoft announced on 6 April 2016, as part of its Windows 10 Insider Preview Build 14316 (Aul, 2016), that users would be able to run Ubuntu's Bash (Bourne Again Shell) natively on Windows. This was enabled by Microsoft and Ubuntu working together to implement WSL (the Windows Subsystem for Linux), allowing a user to run “…tens of thousands binary packages available in the Ubuntu archives (using Bash on Ubuntu on Windows) …” (Vaughan-Nichols, 2016) – so that developers would continue to use Windows. One interview respondent talked about Linux as a developer's platform of choice during the interviews.
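As a small aside, a script running under WSL can detect that it is on Windows rather than native Linux, because the WSL kernel identifies itself as Microsoft's in /proc/version. A minimal sketch (the function name is illustrative):

```shell
# Hypothetical check: distinguish "Bash on Ubuntu on Windows" (WSL)
# from native Linux by inspecting the kernel version string.
detect_environment() {
  if grep -qi 'microsoft' /proc/version 2>/dev/null; then
    echo "wsl"
  else
    echo "native"
  fi
}

detect_environment
```

On a native Linux machine this prints "native"; under WSL it prints "wsl".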
In Appendix A, the theory of ‘Cumulative Selection’ (Dawkins, 1986) is discussed. Further credence
was lent to the theory’s applicability to technological amelioration during one of the interviews
undertaken – “…if you go and develop something, it makes sense to try and work off something which
already exists, rather than try and create it from scratch. So you know it’s more (if) you’re going to start
a new operating system and if something can give you a head start it would make sense to use that head
start, so in a way it makes sense to use the work others have done already if it’s helpful to you…”
Linux has been demonstrated to be a versatile, robust and adaptable operating system kernel. This versatility and adaptability to almost any type of usage scenario has allowed its successful propagation across a multitude of platforms. In the case of the desktop, in the opinion of the researcher, Linux was three years too late: by the time kernel version 1.0 was released in 1994, Windows 3.1 had already taken hold, and when Linux was finally in a position to compete the opportunity had gone.
Finally, due to the lack of a ‘killer app’, there was no compelling reason for existing Windows users to switch to Linux. In conclusion, Linux's failure on the desktop cannot be reversed, but with the reducing relevancy of the desktop it matters less; in the next five years it is most likely to be other operating systems that will be searching for relevancy and trying to catch up with Linux.
Appendix A - The history of Unix and Unix-like operating systems
What is past is prologue.
(William Shakespeare, Tempest 2.1.253)
In order to better understand the current challenges faced by Linux when trying to make a breakthrough
on the desktop, it is important to consider Linux first within a historical context. In this section it is
contended that Linux is the logical culmination of a phenomenon known as ‘Cumulative Selection’.
This is the concept that as a result of a sequence of non-random, cumulative steps, a complex end-
product is derived from beginnings that were comparatively simple. (Dawkins, 1986)
In 1440, the printing press was invented by Johannes Gutenberg. His invention was an aggregation of existing technology, combining oil-based ink with the screw presses previously used to produce wine and olive oil (Shenkar, 2010). Had those technologies not already existed, Gutenberg would not have been in a position to converge them into his new device, which arguably turned out to be the most important invention in the history of mankind.
Further strengthening this train of thought, Curwen and Whalley (2014) argue that technological amelioration usually advances via a series of generations (or part generations), and is usually achieved through better hardware, better software, or a combination of the two.
Drawing on these ideas of cumulative selection and technological amelioration, this section demonstrates that Linux is an amalgam of all the useful
Thesis - Linux on the desktop

  • 1. 1 CCM4902 – Postgraduate Project Linux on the desktop: A study into why it has failed to succeed in capturing desktop market share Adam Lalani M00549948 Supervisor: Santhosh Menon 27 September 2016 "A thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in Computer Network Management."
  • 2. 2 Table of Contents Abstract...................................................................................................................................................4 List of Figures.........................................................................................................................................5 List of Tables ..........................................................................................................................................5 Introduction.............................................................................................................................................6 Background.........................................................................................................................................6 Problem Statement..............................................................................................................................9 Research Objectives..........................................................................................................................12 Approach...........................................................................................................................................13 Literature Review..................................................................................................................................14 Timeline............................................................................................................................................14 Process (Method for collection) – sources, keywords ......................................................................15 Review of Topics..............................................................................................................................18 Conclusions.......................................................................................................................................22 Output – a simple definition, a conceptual 
model (Dimensions)......................................................27 Literature Gap...................................................................................................................................27 An Experimental Comparison of Linux and Windows.........................................................................30 Experimental Procedure....................................................................................................................31 Stage 1 - Installation .........................................................................................................................32
  • 3. 3 Stage 2 – Start Up / Shutdown..........................................................................................................32 Stage 3 – I/O Intensive Operations ...................................................................................................33 Stage 4 – Processor Intensive Operations.........................................................................................33 Stage 5 – Power Management...........................................................................................................34 Presentation of Results......................................................................................................................34 Discussion of Results........................................................................................................................36 Interviews with IT Professionals ..........................................................................................................38 Interview Procedure..........................................................................................................................42 Discussion of Results........................................................................................................................43 Conclusion ............................................................................................................................................53 Appendix A - The history of Unix and Unix-like operating systems ...................................................57 Appendix B - Interviews.......................................................................................................................76 Interview 1 – Robert Fitzjohn...........................................................................................................76 Interview 2 – Prasad KM ..................................................................................................................86 Interview 3 – Sanjay Banerjee 
..........................................................................................................99 Interview 4 – Renjith Janardhanan..................................................................................................110 Interview 5 – Glen Coutinho...........................................................................................................126 References...........................................................................................................................................136
  • 4. 4 Abstract The Linux kernel has been wildly successful since its creation in 1991 by Linus Torvalds. Propelled forward by the diffusion of the Internet and portable devices, Linux is now used in over 1.4 billion devices – powering inter alia smartphones, tablets, the social media juggernaut Facebook, nuclear submarines and the International Space Station. Despite this success, it is only used on just 1.74% of desktop PCs. Two lines of inquiry were followed to ascertain the reason(s) for Linux’s lack of success on the desktop – firstly, an experimental comparison between Linux Fedora 24 and Windows 10 was undertaken, in order to demonstrate that the lack of market share was not as a result of deficiencies of the operating system itself, and secondly, qualitative interviews were conducted with 5 IT industry professionals with a combined 96 years of experience – responsible between them for the purchasing, configuration, support and usage of tens of thousands of PCs during their careers. The experimental comparison proved that the performance and functionality of Linux is similar enough to Windows to be discounted as a factor for its lack of adoption on desktop PC, whilst the qualitative interviews established that the fundamental reason for the lack of success was due to the lack of a ‘killer app’. Windows has Microsoft Office, but such a ‘killer app’ does not exist on the Linux platform. Furthermore, the Linux kernel came too late to become widely prevalent during the desktop PC explosion that began in the early 1990s, whereas its availability at the time of the rise of the Internet era and the portability revolution allowed it to dominate those market spaces. In the case of the desktop, it was the right kernel at the wrong time.
  • 5. 5 List of Figures Figure 1 – Number of Scholarly and Peer-Review Papers on Summon, timeline based......................14 Figure 2 – Keywords Established From Content Analysis...................................................................15 Figure 3 – Classification of final set of publications for literature review ...........................................18 Figure 4 - Output – a simple definition, a conceptual model (Dimensions).........................................27 List of Tables Table 1 – Desktop Operating System Market Share as at February 2016 (netmarketshare.com) ........10 Table 2 – Mobile/Tablet Operating System Market Share as at February 2016 (netmarketshare.com)11 Table 3 – Final set of publications for literature review.......................................................................17 Table 4 – Content overview of papers used for literature review relating to Linux’s architecture.......19 Table 5 – Content overview of papers used for literature review comparing Linux to Windows/other operating systems..................................................................................................................................20 Table 6 – Content overview of papers used for literature review concerned with the adoption of Linux/other open source software.........................................................................................................21 Table 7 – Results of the 5 experimental stages.....................................................................................35 Table 8 - Qualitative Interviews - Definitions and Measurements .......................................................42
  • 6. 6 Chapter 1 Introduction Background “…I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones…” (Linus Torvalds, Usenet posting 25 August 1991) (Peng et al, 2014) On the 25th August 1991, an unknown 21 year old student at the University of Helsinki in Finland, named Linus Torvalds, posted on a Usenet forum that he was working on creating and releasing an operating system kernel based on Minix, a Unix-like operating system (Malone and Laubacher, 1999). His stated intention was that it would be created for hobbyist purposes, and would not be intended for professional use. Minix had been created by Andrew S Tanenbaum (Tanenbaum, 1987), in order to better assist students that he taught at Vrije Universiteit Amsterdam in the Netherlands about operating systems. The idea to create his own operating system came about when Tanenbaum was teaching his students how to use AT&T’s Unix version 6. As Tanenbaum stated “The bean counters at AT&T didn’t like this: having every computer science student in the world learn about its product was a horrible idea. Version 7 came out with a license that said, “thou shalt not teach,” so you couldn’t teach version 7 Unix anymore” (Severance, 2014). So he created his own operating system that was similar enough in its principles to Unix version 7 that he was able to teach unfettered by AT&T’s draconian licensing constraints.
  • 7. 7 Minix had become the academic researcher’s platform of choice due its readily available source code that could be examined and changed easily (if deemed necessary) - due to it being written in C, having a system call interface that worked exactly like Unix version 7, and whilst it was a fully-fledged operating system it was lightweight enough for one person to quickly absorb and comprehend (Mull and Maginnis, 1991). Torvalds was himself an avid Minix user. In 1991, he purchased for himself a new Intel 80386 based- PC, but he soon realised that Minix could not take advantage of the enhanced protected mode (also known as protected virtual address mode) of the newly released processor, so he took it upon himself to write his own operating system kernel so that he could do so (Dettmer, 1999). His original kernel was just a basic task-switching kernel – all it could do was display a message from each of two running processes. Minix was used to compile the kernel and provide a file system. Torvalds managed to post a semi-complete version of his operating system source code onto an FTP site in November 1991 (Wiegand, 1993). The following year, Torvalds combined his work with another on-going open-source project entitled GNU - which stood for GNU is not Unix (Casadesus-Masanell and Ghemawat, 2006). GNU, was the brainchild of Richard Stallman (Hars and Ou, 2001), who worked at MIT (Massachusetts Institute of Technology), and was under development in order to create an entirely free Unix-like operating system. By 1992, the GNU project had yet to complete its own kernel (Stallman, 1998), but had completed many other components required for an operating system, which included compilers, a command shell, libraries, a windowing system and text editors. Torvalds combined his kernel with the readily and freely available GNU programs to create a fully-fledged operating system (Bokhari, 1995). Linux was made available under the GNU General Public License. 
This license allows the freedom to any end user to have access to and be able to modify the software source code (as long as it is made clear the source code has been modified), or distribute (and if so desired - charge for) copies of the software. Additionally, the software can be used in new programs – modified or unmodified, and that
  • 8. 8 if that is the case, the recipient of the software is granted the same freedom as the distributor (The GNU General Public License v3.0 – GNU Project – Free Software Foundation, 2007) The computing landscape in the early 1990s was somewhat different to what it is today. In a May 1990 article in IEEE’s Computer magazine entitled ‘Recent Developments in Operating Systems’ (Boykin and LoVerso, 1990) it was noted that generally operating systems of the time fell into one of two categories – the first being referred to as mere “loaders” of programs (such as MS-DOS and DR’s CP/M) with limited support for additional peripherals, and the second being of a more complex variety that could offer access to manifold devices on a concurrent basis (examples include AT&T’s Unix and Data General’s AOS/VS). However, mainly due to the rise to prominence of Ethernet networking, commoditised CPUs and other significant hardware improvements, future operating systems would have to address newly evolving requirements to specifically power graphical user interface based workstations that were interconnected using local area networks (LANs). Whilst Torvalds had begun work on his kernel, at the same time other operating systems began to appear that could also harness the power of Intel’s 80386 processor, such as IBM’s OS/2 and Microsoft’s Windows NT - additionally, at this point in time Unix had just become the first major ‘machine independent’ operating system, enabling it to run on different hardware platforms. (Wilkes, 1992). All of this evolution was being driven by the aforementioned recently evolving resource-hungry usage scenarios like networking and graphical/multimedia applications (Cheung and Loong, 1995) Just a few weeks before Torvald’s Usenet post, the World Wide Web was first made available to the public on the Internet (Carbone, 2011). 
Undoubtedly, the advent of the Internet era would have also contributed to the necessity for both hardware and operating system improvements. This line of argument can be strengthened by Curwen and Whalley (2014), who wrote that changes in technology generally move forward via a series of generations or part generations, and that these changes are achieved either through better hardware, software, or a combination of the two. Indeed, as it has already been demonstrated, Torvalds wrote his kernel to harness the power of his newly purchased hardware
  • 9. 9 that Minix was not able to do. Furthermore, West and Dedrick (2001) assert that the rise of Linux’s prominence is as a direct result of the Internet. Problem Statement Almost 25 years after that initial Usenet post, the kernel created by Torvalds, which later became known as Linux, has gone on to become the number one most used operating system kernel in the world (The Linux Foundation, no date). Linux finds itself being used for such diverse applications as the running of nuclear submarines (Claiborne Jr, 2001), the International Space Station (Ortega, 1999), over 1.4 billion portable devices (Vincent, 2015), as well as powering and underpinning the social media juggernaut Facebook (Zeichick, 2008) inter alia. Whilst all of this has shown that the Linux kernel is versatile and has many usage cases, there is one cross section of the computing landscape that Linux has, as of the time of writing, not managed to successfully permeate – the desktop computing space. For the purposes of this paper, the term desktop computing is defined as traditional desktop or laptop PCs that utilise the x86 instruction set, and therefore will exclude servers, mobile devices - such as tablets or smartphones, and games consoles. Operating system market share data for February 2016 is presented in Table 1 for desktop operating systems, and Table 2 for mobile operating systems. This data was provided by netmarketshare.com, a website that collects data from the web browsers of individual unique devices that visit one of over 40,000 websites in their content network, as well as from over 430 referral sources including search engines, enabling them to provide statistics on different web browsers being used, as well as the operating system(s) used by those browsers (Can you explain the Net Market Share methodology for collecting data?, 2016).
Operating System                        Total Market Share
Windows 7                               52.41%
Windows 10                              12.31%
Windows XP                              11.34%
Windows 8.1                             10.13%
Mac OS X 10.11                          3.57%
Windows 8                               2.56%
Mac OS X 10.10                          2.27%
Linux                                   1.74%
Windows Vista                           1.68%
Mac OS X 10.9                           0.86%
Mac OS X 10.6                           0.35%
Mac OS X 10.8                           0.29%
Mac OS X 10.7                           0.29%
Windows NT                              0.10%
Mac OS X 10.5                           0.06%
Mac OS X 10.4                           0.02%
Windows 2000                            0.01%
Windows 98                              0.01%
Mac OS X (no version reported)          0.00%

Table 1 – Desktop Operating System Market Share as at February 2016 (netmarketshare.com)
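Grouping the Table 1 rows by operating system family makes the disparity plain. The following short sketch (the grouping by name prefix is the author's own illustration, not part of the netmarketshare.com methodology) aggregates the figures above:

```python
# Aggregate the February 2016 desktop shares from Table 1 by OS family.
table_1 = {
    "Windows 7": 52.41, "Windows 10": 12.31, "Windows XP": 11.34,
    "Windows 8.1": 10.13, "Mac OS X 10.11": 3.57, "Windows 8": 2.56,
    "Mac OS X 10.10": 2.27, "Linux": 1.74, "Windows Vista": 1.68,
    "Mac OS X 10.9": 0.86, "Mac OS X 10.6": 0.35, "Mac OS X 10.8": 0.29,
    "Mac OS X 10.7": 0.29, "Windows NT": 0.10, "Mac OS X 10.5": 0.06,
    "Mac OS X 10.4": 0.02, "Windows 2000": 0.01, "Windows 98": 0.01,
    "Mac OS X (no version reported)": 0.00,
}

def family_share(prefix: str) -> float:
    """Sum the share of every version whose name starts with `prefix`."""
    return round(sum(v for k, v in table_1.items() if k.startswith(prefix)), 2)

print(family_share("Windows"))   # all Windows versions combined
print(family_share("Mac OS X"))  # all OS X versions combined
print(family_share("Linux"))     # Linux as reported
```

Summed this way, the Windows family holds 90.55% and the OS X family 7.71% of the desktop market, against Linux's 1.74%.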
Operating System        Total Market Share
Android                 59.65%
iOS                     32.28%
Windows Phone           2.57%
Java ME                 2.4%
Symbian                 1.57%
Blackberry              1.45%
Samsung                 0.05%
Kindle                  0.02%
Bada                    0.01%
Windows Mobile          0.00%
LG                      0.00%

Table 2 – Mobile/Tablet Operating System Market Share as at February 2016 (netmarketshare.com)

In addition to the data presented in Table 1 and Table 2, w3techs.com (Usage statistics and market share of Unix for websites, 2016) states that 36.2% of the top 10 million websites (based on rankings collated by Alexa, a company belonging to Amazon.com) are powered using the Linux kernel. Using those data sources as evidence, it is therefore clear that Linux has failed to capture desktop market share, whilst it has been a proven success on mobile devices and on mission-critical web servers on the Internet. The intention of this paper is to perform exploratory research in order to establish the reasons for Linux's failure to penetrate the desktop computing space. It will be demonstrated that Linux is comparable in features and performance to the other popular desktop operating systems it is ranked against in Table 1, so it stands to reason that there must be other causes for this disparity in market share versus other market segments, which this paper will attempt to uncover.
The working hypothesis is that Linux has failed to achieve a sizeable portion of the desktop operating system market for a multitude of reasons, stated below:

• It is not preinstalled on new PCs that are sold
• There are too many Linux distributions available, which has led to fragmentation
• Different package managers are used by different distributions
• Multiple desktop GUI environment choices
• A perceived lack of user friendliness and a steep learning curve
• Deficiencies in hardware support, especially for graphics adapters
• Paucity of available software/native versions of popular applications

Research Objectives

The research objectives of this paper are:

• Examining the history of Linux from the evolutionary perspective of Unix and other Unix-like operating systems (refer to Appendix A)
• Experimenting with various competing operating systems to better understand the difficulties a user might face in getting up and running
• Establishing the causes of Linux on the desktop's failure through qualitative interviews
• Understanding the reasons for Linux's success on other, non-desktop hardware platforms
• Attempting to discover whether it is possible to reverse the trend, and how it might be reversed
Approach

In order to prove or disprove the working hypothesis, and to establish the reasons for Linux's success on other, non-desktop platforms, Linux will be compared to other operating systems through experimentation with installation and configuration, via the creation of a desktop base image on each operating system. The working hypothesis will be interrogated further through qualitative interviews with a number of IT professionals known to the researcher. Once it is proved or disproved, an answer will finally be sought as to whether there is a possibility of reversing the trend and, if so, how it might be done. Additionally, in Appendix A, the history of Unix and Unix-like operating systems is presented to demonstrate how Linux has evolved into what it is at the time of writing.
Chapter 2 Literature Review

“…The time will come when diligent research over long periods will bring to light things which now lie hidden…” (Seneca, Natural Questions) (Ellis, 1998)

Timeline

As was discussed in the introduction, Torvalds's first version of the Linux kernel was released online in 1991, so the literature review timeline runs from 1991 to the present. Initially, a very loose preliminary search was performed using Summon – Middlesex University's database of publications – for the keyword 'Linux'. As demonstrated in Figure 1, the most recent ten years or so provide quite a significant body of research on Linux in general to be delved into.

Figure 1 – Number of Scholarly and Peer-Reviewed Papers on Summon, by year (1991–2016)
Process (Method for collection) – sources, keywords

Two major databases were used for the literature review on the research topic. The first was Summon, a database capable of searching many library resources at once, which Middlesex University provides to its students. The second database used to uncover relevant articles was Google Scholar. It was felt that these two databases would yield sufficient data for the literature review phase of this paper. Only top-quality journal publications and articles, conference proceedings, and peer-reviewed magazines were utilised.

So that a collection of pertinent keywords could be established for a better focused and defined search, a content analysis was carried out against the websites of prominent organisations with a strong involvement in the contemporary world of Linux and its ongoing propagation as a successful operating system. The organisations analysed were the Linux Foundation, IBM, Dell, Fedora, Red Hat, Ubuntu, and DistroWatch. The initial findings from the keyword analysis are shown in Figure 2.

Figure 2 – Keywords Established From Content Analysis (keywords ranked by frequency of occurrence, from 'Open Source' and 'Costs/Free' down to 'POSIX', 'Repository' and 'Unix')
To further filter the keywords so that the database searches would bring forward results pertinent to this paper's line of enquiry, all keywords that yielded fewer than five results were disregarded, leaving a shortlist of thirteen keywords. Surprisingly, 'kernel' came some way down the list. Some of the remaining shortlisted keywords, such as 'Mainframe', 'Cloud' and 'Ubuntu/Canonical', were removed as they were too far removed from the subject matter of this paper. A shortlist of ten keywords would be used to search the selected databases (in conjunction with the associated word 'Linux'):

• Open source
• Cost / Free
• Server
• X Windows
• Community / Communities
• Performance
• Security
• Operating System
• Enterprise / Corporate
• Desktop

Search results would only be considered if they fell under the remit of computer science, and yet that still yielded a combined total of 74,275 publications on Summon. As this was such a broad number of papers, most of which would in all likelihood be irrelevant, it was decided to further reduce the keywords to only cost/free, operating system, enterprise/corporate and desktop. Whilst a significant reduction had been made, a total of 27,117 papers remained. Therefore, combinations of the keywords were used to achieve a more manageable number of papers to review. Eventually, a final total of 45 papers ascertained to be relevant to this research was collated. These are summarised in Table 3 and Figure 3.
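The shortlisting procedure just described – discard keywords with fewer than five hits, drop off-topic survivors, then pair each remaining keyword with 'Linux' – can be sketched as a small script. The hit counts below are illustrative placeholders, not the actual Summon results:

```python
# Sketch of the keyword-shortlisting procedure described above.
# Hit counts are illustrative placeholders, not real Summon figures.
raw_counts = {
    "open source": 23, "cost/free": 21, "cloud": 12, "server": 18,
    "x windows": 9, "ubuntu/canonical": 11, "mainframe": 7,
    "performance": 15, "security": 14, "operating system": 20,
    "enterprise/corporate": 13, "desktop": 19,
    "community/communities": 10, "kernel": 4, "licensing": 2, "apache": 3,
}

# Step 1: disregard any keyword that yielded fewer than five results.
shortlist = {k: v for k, v in raw_counts.items() if v >= 5}

# Step 2: remove topics judged too far from the paper's line of enquiry.
for off_topic in ("mainframe", "cloud", "ubuntu/canonical"):
    shortlist.pop(off_topic, None)

# Each remaining keyword is searched in conjunction with 'Linux'.
queries = [f"Linux AND {k}" for k in sorted(shortlist)]
print(len(shortlist), queries[0])
```

With these placeholder counts, step 1 leaves thirteen keywords and step 2 leaves ten, mirroring the process described above.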
Type of Publication: Journal
1. Elsevier – Journal of Computers and Security – 1
2. ACM – SIGOPS Operating Systems Review – 2
3. Elsevier – Journal of Information Economics and Policy – 1
4. Journal of Management Science – 1
5. Elsevier – Journal of Systems and Software – 2
6. MIS Quarterly – 1
7. Library Hi Tech – 2
8. Computing in Science and Engineering – 1
9. International Digital Library – Perspectives – 1
10. Journal of Academic Librarianship – 2
11. Elsevier – E-Commerce, Internet and Telecommunications Security – 1
12. Springer – Knowledge, Technology and Policy – 1
13. ACM – Transactions on Security – 1
14. Journal of Corporate Accounting and Finance – 1

Type of Publication: Magazine (Peer-Reviewed)
1. IEEE Security and Privacy – 1
2. ACM Queue – 1
3. Elsevier – Network Security – 1
4. IEEE Software – 8
5. SSM IT Professional – 1
6. The CPA Journal – 1
7. Library Journal – 1
8. IEEE Computer – 2
9. Elsevier – Computer & Security Report – 1

Type of Publication: Conference Proceedings
1. IEEE Proceedings – 8
2. Proceedings of the Workshop on Standard Making – A Critical Research Frontier for Information Systems – 1

Type of Publication: Other
1. Forrester Research – 1

Table 3 – Final set of publications for literature review (number of articles per publication)
Figure 3 – Classification of final set of publications for literature review (Journal: 18; Magazine: 17; Conference Proceedings: 9; Other: 1)

Review of Topics

The final set of publications used for the literature review was collated and briefly summarised to better analyse the papers' topics, content, direction, lines of inquiry/research, and how they would fit with the research to be performed in this paper. The papers have been broken down into three broad areas: the first covers Linux architecture (Table 4), the second comprises papers that compare Linux to Windows and other operating systems (Table 5), and the third comprises publications focused on the adoption of Linux and other open source software or operating systems (Table 6).
Linux Architecture

Lu, et al. (2014): In-depth study into Linux file system evolution and its features, demonstrating that the Ext4 file system is ruggedised enough for use.
Harji, et al. (2011): Demonstrates that different Linux kernel versions have major performance variations between them.
Dukan, et al. (2014): An analysis of performance versus power consumption between Intel/AMD and ARM-based processors using Linux, concluding that the type of processor architecture is becoming irrelevant – an indication of the future direction of computing.
Xiao and Chen (2015): Comprehensive study into potential logging overhead issues when using Linux without adaptive auditing.
Thiruvathukal (2004): Further evidence of distribution fragmentation, as well as a look at some of Linux's perceived weaknesses, such as hardware support and binary package dependencies.
Radcliffe (2009): Comparatively examines how access to hardware is controlled under Linux, FreeBSD and Windows.
Shankar and Kurth (2004): An evaluation of the security implications of open-source code, such as is used in the Linux kernel.
Harji, et al. (2013): Discussion of the complexity and problems encountered during Linux kernel upgrades.

Table 4 – Content overview of papers used for literature review relating to Linux's architecture
Comparison to Windows / Other Operating Systems

Bean, et al. (2004): Concerned with establishing that open source operating systems, and open source software in general, are primed to perform akin to their proprietary counterparts.
Chaudri and Patja (2004): Shows that whenever possible, Microsoft has sought to perpetuate its operating system monopoly through the use of litigation.
Macedonia (2001): Focussed on Linux's inability to compete with Windows as the PC gamer's platform of choice, and the reasons why.
Massey (2005): Discusses a 2005 open source software conference, where claims were made that 2005 would be the year that Linux would break through on the desktop.
Goth (2005): Talks about how Linux and other open source software has matured to rival commercial software, and how the question of moving to open source has become one of how, rather than whether.
Sanders (1998): Highlights how Microsoft assimilates the functionality of emerging software to aggressively dominate the software industry to the detriment of others.
Hilley (2002): Establishes that as early as 2002, governments and government agencies across the world began Linux adoption programs, much to Microsoft's chagrin.
Dougherty and Schadt (2010): A case study demonstrating that widely used Windows applications have Linux-based alternatives, whilst cautioning that some software may never have an alternative.
Coyle (2008): Shows where Linux lags behind its contemporaries, and that there are hundreds of distributions to choose from.
Kshetri (2007): Argues that software piracy (principally of Windows) takes away potential Linux market share on the desktop.
Dedeke (2009): Proposes the idea that Linux is not necessarily better than Windows from a vulnerability perspective.
Tsegaye and Foss (2004): A comparative study of Windows and Linux device driver implementation, praising Windows's better ability to work on a plug-and-play basis versus Linux.
Salah, et al. (2013): A review and analysis of security concerns when deploying commoditised operating systems.
Casadesus-Masanell and Ghemawat (2006): Provides a close look at what motivates contributors to Linux and other open source development projects, and at how Linux's availability causes competitors like Microsoft to reduce their pricing to remain competitive.
West and Dedrick (2001): Study into the rise of Linux, primarily focused upon the motivations of suppliers and buyers of complementary assets, as well as how Microsoft reacted to this changing landscape.
Stange (2015): Highlights that in an IT environment it is common to find a mixture of different operating systems in use.

Table 5 – Content overview of papers used for literature review comparing Linux to Windows/other operating systems
Adoption – Reasons for, Costs, Drawbacks, Risks, Benefits

Giera and Brown (2004): Comprehensive research into the costs, drawbacks and risks associated with migrating to open source software – specifically the differences versus commercial software.
Young (1999): Argues that the claim that Linux systems potentially have a lower cost of ownership across the lifecycle may be naïve, despite the operating system being free of charge.
Lewis (1999): Asserts that open source software does not become mainstream unless commercialised.
Leibovitch (1999): An early case study of an all-Linux enterprise, weighing Linux's strengths against the barriers to its acceptance. Despite being an older paper, the same arguments appear to hold true against contemporary literature, making it a valuable primary source.
Ven, et al. (2008): Examination of the advantages and disadvantages of open source software adoption, specific to Linux.
Gwebu and Wang (2010): An exploratory study of the user perceptions of open source software adopters, to see whether different mind-sets are involved in the decision to adopt.
Auger (2004): Discusses how older hardware can be repurposed using Linux, by stripping away unnecessary features and overhead, thus leading to cost savings.
Maddox and Putnam (1999): Highlights both positives and negatives of Linux adoption, mainly from a cost-centric view.
McClaren (2000): Discusses the notion that Linux is essentially an unsupported operating system.
Delozier (2008): Another paper that discusses Linux fragmentation – many distributions and desktop environments – but does positively propose software alternatives to commercial applications on other platforms.
Chau and Tam (1997): An exploratory study into factors that impact the adoption of open source software and systems.
Kirby (2000): More evidence of Linux distribution fragmentation, discussion of Linux supporting various hardware platforms, and of Linux being optimal for extending the useful life of old hardware. Argues that the cost of software is not the sole utility of Linux.
Mustonen (2002): Research into the economic logic of Linux and other open source applications.
West and Dedrick (2001): Conference paper that establishes the reasons for the rise of Linux, and presents research into the adoption motivations of various organisations between 1995 and 1999.
Anand (2015): Another paper that discusses fragmentation in Linux desktop GUIs and distributions, but also establishes positive reasons for using a Linux distribution.
Dedrick and West (2004): Exploratory study into the various factors influencing open source platform adoption, and the processes used to evaluate and then implement such technologies.
Ajila and Wu (2007): Empirical study into factors affecting the economics of open source software development, as well as the steps involved in open source software adoption.
Kshetri (2004): Comparison of macro and micro influences on the decision to adopt Linux in developing nations – asserts that a lack of interoperable software is an issue requiring attention.
Dedrick and West (2003): Looks at the consequences of the adoption of standards, from technological, environmental and organisational standpoints.
Bokhari (1995): Establishes that a high level of system administrator competency is required to support Linux in a networked environment.
Decrem (2004): Looks at obstacles to Linux's broader adoption on the desktop, several of which are established.

Table 6 – Content overview of papers used for literature review concerned with the adoption of Linux/other open source software
Conclusions

As pointed out by Stange (2015), it has become commonplace to find an amalgam of different operating systems within an IT operation. In the past, supporting Unix-like operating systems (such as Linux) in a networked environment necessitated high-level system administrator skills and proficiency in order to maintain both system and network stability (Bokhari, 1995). Despite this, towards the latter half of the 1990s, organisations began to sense that there was value in exploring the possible adoption of open source operating system software, primarily to avoid the constrictions imposed by the use of proprietary software (Chau and Tam, 1997). Perhaps sensing this, at around the same time, Microsoft had begun to embark upon an aggressive strategy of incorporating any well-received new features introduced by other software companies into its own Windows operating system, with the overall effect (and probable motivation) of removing most of its competitors from the market (Sanders, 1998).

Around this time, cost implications began to arise in discussions. Some held the position that in order to succeed, Linux would have to become commercialised and become a chargeable product (Lewis, 1999). Others began to argue that whilst Linux is free of charge, its actual total cost of ownership makes it more expensive than Windows – in the main due to having to spend more on software maintenance and support than one would spend on Windows – fitting quite logically into Bokhari's 'administrator skills proficiency' requirement previously mentioned (Young, 1999). This argument is further elaborated upon by McClaren (2000), who states that whilst being free of charge is Linux's biggest selling point, it is an operating system that is essentially unsupported.
However, arguing against Young in the same edition of the same publication, Kirby stated that being concerned solely with cost detracts from the utility of Linux, especially as it can be used to increase the longevity of hardware beyond traditional vendor-supported lifecycles, and would therefore offer an advantage against its commercially available operating system rivals (Kirby, 2000). The same benefit was also highlighted by Auger (2004).

Around the same time (in the late 1990s), the earliest case studies of the corporate use of Linux began to manifest. One such case study focused on a Canadian start-up called Starnix, which adopted Linux for its primary technical strengths of scalability, flexibility and reliability (Leibovitch, 1999). Again the topic of support was highlighted as an impediment to the widespread adoption of Linux, although in the case of Starnix it was not an issue, because the Unix-based background of its team provided the complementary skill set required to support its set-up.

At a similar juncture, papers began to appear that charted, studied and analysed the rise of Linux (the two papers by West and Dedrick, 2001). It has already been touched upon in the introduction that Linux's rise to prominence is a direct result of the Internet. The two aforementioned papers discuss that new platforms (be they hardware or software) often only become acceptable to IT departments, ordinarily resistant to change, when those platforms introduce new use cases – and specifically in the case of Linux, its most common early use cases were Internet-centric: web services, firewalls, security and other similar services. Once more, those papers agree with the sentiments already described – that Linux requires support staff of technical sophistication, that there are cost-saving benefits in the use of pre-existing hardware, and that industry giants such as IBM and HP needed to throw their weight behind the commercialisation of the operating system in order to be better positioned against Microsoft, which by 2001 had begun to happen. Dedrick and West also discuss the notion of complementary assets – i.e. in order for Linux to gain traction, those industry giants must provide a complementary basket of both hardware and software, which would theoretically encourage more widespread adoption of both, in a hand-in-hand fashion (also discussed by Decrem, 2004). They also warned against "forking" – essentially the lack of adoption of a common standard – becoming an impediment to Linux adoption.
This concept of fragmentation (or, as Dedrick and West put it, "forking") is in all likelihood one of the central contributors to the complication and confusion of Linux adoption. Many papers have highlighted the myriad of available Linux distributions, and the availability of countless flavours of Linux makes organisations more loath to adopt it (Kirby, 2000) (Anand, 2015) (Delozier, 2008) (Coyle, 2008) (Thiruvathukal, 2004) (Decrem, 2004).

Subsequent research was carried out in 2003 and 2004 aimed at investigating the factors that might influence the adoption of open source software (Dedrick and West, 2003) (Dedrick and West, 2004). Their research ascertained that the choice of server software did not affect how the general employee populace viewed their computing experience – one interview respondent said "(the users) don't know, (and) don't care" – meaning that so long as the underlying platform is not obvious, it has little effect on the end user. Once more, the need for complementary Linux skills was highlighted as an obstacle to adoption. The most prominent issue was the potential inability to run third-party applications on Linux (also corroborated by Kshetri, 2004 and Decrem, 2004). However, several advantages were cited, namely the reduction of software costs and the ability to repurpose otherwise obsolete hardware – positives already discussed. Such adoption decisions are, though, said to be made on an infrequent basis, probably due to the aforementioned resistance to change.

Some additional adoption factors were uncovered in the same two papers (Dedrick and West, 2003) (Dedrick and West, 2004): the topics of 'slack' and 'innovation'. With innovation, the inference is that following a path of innovation leads to the earlier adoption of new technology, and that such early adoption is a direct result of the strategy laid out by the business and how IT is aligned to it.
So, if IT is of central strategic importance to a business, this will lead to earlier adoption of technology such as Linux. Looking more closely at the concept of slack: for an organisation that has spare IT department human resources capacity but limited financial spending power, it begins to make sense to use the human resource slack to save money by adopting a free operating system, because this additional human capacity allows the time and effort for experimentation with new technology (named by Dedrick and West as "trialability") and for the learning that fills in the skill-set gap, making it no longer an obstacle to possible deployment.

During the investigation into the relevant literature, it also became apparent that a number of papers were concerned with comparing Linux to its contemporaries – in the main, the comparisons were with Microsoft's Windows operating system. One of the several strengths of Windows on the desktop is its prevalence as a platform for playing computer games. Linux has overall failed to dent the computer games market, primarily due to its inability to support Microsoft's DirectX graphics API, and due to audio driver issues (Macedonia, 2001). At the time of writing there is still no native DirectX support on Linux.

It was not only businesses and organisations that were investigating potential systems migrations to the Linux platform – governments worldwide began feasibility studies with the serious intent of migrating away from proprietary platforms. These included both the German and United States governments, despite Microsoft's best attempts to propagate the notion that open source operating systems were inherently insecure compared to its own (Hilley, 2002). This movement began to gain further momentum: as pointed out by Bean et al. (2004), major computer industry players like Hewlett Packard and IBM were heavily marketing their Linux-based hardware – with IBM even using its own employees as field testers for Linux (on the desktop) to ascertain its impact on worker productivity. Not taking this lightly, Microsoft began an aggressive campaign of litigation in order to maintain the status quo of its monopoly (Chaudri and Patja, 2004). However, there were still several advantages to using Windows over Linux.
One of those advantages related to hardware support (Tsegaye and Foss, 2004). They stated that, ideally, the design of device drivers should reduce the necessity for end-user interaction in order to allow the full functionality of the device in question. Windows handles this rather better than Linux, especially when it comes to plug-and-play operability. For an end user on a desktop, this kind of ease of use is, to say the least, rather important.

The year 2005 was talked about as the year that Linux would finally break through into mainstream desktop use (Massey, 2005). The Linux ecosystem had matured to a point where it could be considered on par with its commercial rivals, with the question of whether one should move to open source software evolving into how one would make the leap (Goth, 2005). These developments, and the changing attitudes towards Linux and open source software in general, led Microsoft to reduce the pricing of its software in order to remain competitive, as Linux could simply be downloaded free of charge (Casadesus-Masanell and Ghemawat, 2006). Casadesus-Masanell and Ghemawat also proposed the idea that software piracy of Windows has a detrimental impact on the installed base of Linux (a sentiment also echoed by Kshetri, 2007).

Following on, further comparisons were made between Linux and Windows in terms of security vulnerabilities. Dedeke (2009) wrote that whilst Linux has an overall perception of being more secure, and therefore less vulnerable, than Windows, his research analysing both Red Hat Linux and Windows between 1997 and 2005 indicated that Red Hat had more reported vulnerabilities during that time span than Windows, and that it was a fallacy to assume Windows was inherently insecure compared to Linux. Salah et al. (2013), meanwhile, warn that overall, most operating systems have flaws in terms of security. Dougherty and Schadt (2010) referred to the availability of applications on Linux (such as OpenOffice, Rhythmbox and Firefox) whose utility was equivalent to similar applications available on Windows (such as Microsoft Office, iTunes and Internet Explorer).
They further elaborated on this, noting that whilst there were like-for-like applications for many use cases, choosing Linux did exclude the ability to use certain applications that may never be ported to or made for Linux, and this consideration should not be taken lightly.

Architecture-wise, Linux also has some hurdles to overcome. One major concern centres on kernel upgrades. Knowing when to upgrade kernel versions, and to which version, is a serious concern (Harji et al, 2011) (Harji et al, 2013). There are significant performance variances between different kernel versions, and without referring to benchmarks that can be found online, or testing on the job, it is difficult to know at what point to upgrade or not upgrade the kernel. Further to that, in the researcher's own experience, a kernel upgrade can break graphics driver dependencies, rendering the GUI portion of Linux unusable, as the graphics drivers are compiled against whatever version of the kernel was available at the time, and would need to be recompiled against the newer kernel.

Figure 4 – Output – a simple definition, a conceptual model (Dimensions)
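The recompilation burden just described is commonly automated with DKMS (Dynamic Kernel Module Support), which rebuilds registered out-of-tree modules whenever a new kernel is installed. A minimal sketch of a dkms.conf for a hypothetical out-of-tree graphics module follows; the name 'examplegpu' is illustrative, not a real driver:

```shell
# dkms.conf for a hypothetical out-of-tree module named "examplegpu".
# With AUTOINSTALL="yes", DKMS rebuilds the module automatically when
# a new kernel is installed, addressing the breakage described above.
PACKAGE_NAME="examplegpu"
PACKAGE_VERSION="1.0"
MAKE[0]="make -C . KERNELDIR=/lib/modules/${kernelver}/build"
CLEAN="make clean"
BUILT_MODULE_NAME[0]="examplegpu"
DEST_MODULE_LOCATION[0]="/kernel/drivers/video"
AUTOINSTALL="yes"
```

The module source and this file would be placed under /usr/src/examplegpu-1.0/ and registered with `dkms add -m examplegpu -v 1.0`; `dkms build` and `dkms install` then compile and install it against the running kernel.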
Literature Gap

What has been ascertained from the review of the literature is that from 2009 onwards it has become much harder to uncover new research into Linux adoption on the desktop. This could be due to changing patterns of computing, as discussed by Dukan et al. (2014), who note that traditional PC-based desktop computing is becoming less relevant in an era of portability, driven by the low-power processors used in mobile/tablet devices, by low-power sensor networks, and by the lightweight operating systems based on the Linux kernel that power them – or the reasons established up to 2009 may simply still hold. It is also possible that Linux (driven by competing juggernauts such as HP, Dell and IBM) has focused on its core success areas, such as server-centric application use. As no further research covering the subsequent period has been located, this paper intends to cover it. Furthermore, the existing studies are focused primarily on server-side technology, where Linux has gained widespread acceptance at the time of writing (see the introduction for statistics).

A clear pattern emerged during the 'adoption' portion of the literature review: the decision to adopt is generally influenced by weighing the shortage of skills against savings on software costs, plus the ability to reuse older hardware (Maddox and Putnam, 1999) (Ven, et al, 2008) (Ajila and Wu, 2007) (Giera and Brown, 2004) (Decrem, 2004). As has been demonstrated earlier, whilst Linux does have many comparatively similar applications versus its Windows nemesis, there is still a lack of applications overall. Issues with device drivers and hardware support also appear to be relevant when considering which of the two to choose.
When it comes to security, there is some conjecture as to which is the more secure operating system – with both Windows and Linux, it depends on the attack footprint of any given system, making it difficult to argue either way, although the data provided by Dedeke (2009) lean towards Linux being the more insecure operating system.
It is therefore apt to revisit the research topic, given Linux's widespread adoption on other platforms such as servers and portable devices. It stands to reason that a lack of skills (for support or otherwise) has not hindered its advance on those platforms, so there must be other substantive reasons behind the small share of the desktop market that Linux has captured when compared with its successful penetration elsewhere.
Chapter 3
An Experimental Comparison of Linux and Windows

"…When we design and architect a server, we don't design it for Windows or Linux, we design it for both. We don't really care, as long as we're selling the one the customer wants. If a server goes down the production line, it doesn't really know what OS it has on it…" (Michael Dell, interview with PC Magazine, 3 February 2004 (Miller, 2004))

One of the key contentions of this paper is that Linux, from a functionality and performance perspective, is comparable to Windows, and that performance should therefore be discounted as a reason for its lack of adoption. Whilst the two operating systems do not share the same lineage, it is assumed that the performance of Linux is not a reason behind its lack of desktop market share. According to research performed by Dedrick and West (2003 and 2004), one respondent said that "(users) don't know, (and) don't care (about the operating system in use)" as long as they can adequately perform the tasks they want to perform.

In order to prove or disprove this assumption, an experiment was undertaken between 14 and 17 July 2016 to compare Windows 10 Professional and Linux Fedora Workstation 24. Fedora was specifically chosen as it is the Linux distribution used by Linus Torvalds (Torvalds, 2014). Various measurement metrics were defined, and are elaborated upon in the experimental procedure section that follows. FreeBSD had originally also been considered as an operating system candidate for the experiment, but as its GUI has to be installed separately, it was withdrawn due to time constraints; its withdrawal does not have a material impact on the research focus of this paper.
Experimental Procedure

A Lenovo X201 laptop (manufactured in 2010) was chosen. As identified in the literature review, Auger (2004), Kshetri (2004), Decrem (2004) and Kirby (2000) wrote that the longevity of older hardware can be extended by repurposing it with Linux as its operating system; the experiments are therefore also a logical extension of the existing body of research.

The hardware used was as follows:

• Processor – Intel Core i5 M540, 2.53 GHz
• 8GB RAM (DDR3-1066 MHz)
• SanDisk Ultra Plus 256GB SSD
• 9 cell battery
• 12.1" WXGA LED display
• On-board Intel HD Graphics adapter
• External LG GP30NB30 Slim Portable DVD drive
• SanDisk Cruzer Blade 8GB USB drive
• Sony Xperia Z3+ smartphone (for timer measurements)

The experiment was broken down into five broad areas:

• Installation
• Start-up / shutdown
• I/O intensive operations
• Processor intensive operations
• Power management
Stage 1 – Installation

Prior to each installation, all partitions on the SanDisk Ultra Plus 256GB SSD were deleted, so that it would present itself to the operating system installer as a new, empty drive. During installation, default automatic drive partitioning was selected on both Windows 10 and Fedora 24. All default installation options were chosen, and one user, "unitest", was created without a password. The number of unique interactions (such as pointing device clicks or keyboard entries used to define a username) was noted, as well as the number of reboots required to arrive at a working desktop and the time taken to complete the installation. Both operating systems were installed using the external LG DVD drive, as the laptop did not have an on-board optical drive.

Once installation was complete, both operating systems were updated to the most current patch levels available from their respective providers. The time to install updates was not measured: the Windows 10 media was issued in late 2015, whereas the Fedora 24 media was downloaded on the first day of the experiment (14 July 2016), so the figures would not have been comparable.

Stage 2 – Start Up / Shutdown

In order to test both start up and shutdown performance, several timing measurements were recorded:

• The time taken to start up the laptop from the powered off state to the user desktop
• The time taken to completely shut down the laptop from the user desktop to the powered off state
• The time taken to hibernate the laptop from the user desktop to sleep mode
• The time taken to wake the laptop from sleep mode back to the user desktop

Stage 3 – I/O Intensive Operations

Tests were undertaken to measure the I/O performance of the two operating systems using two types of files:

• Small files – 106 files of varying types and sizes, totalling 326MB
• Large file – one Matroska Multimedia Container file (.MKV) containing a 1080p Blu-ray rip of a film, totalling 3.88GB

In both cases, the small files and the large file were subjected to three file move operations, and the time taken on each operating system was recorded:

• Hard drive to hard drive
• Hard drive to USB drive
• USB drive to hard drive

Stage 4 – Processor Intensive Operations

Three sets of processor intensive tests were carried out. In the first test, the 64-bit version of the HandBrake open source video transcoder was installed on both operating systems, the Matroska file from stage 3 was converted from MKV to MPEG-4 format (using the default Normal setting), and the duration of the conversion was recorded.
Secondly, the Geekbench processor benchmarking software was installed (the 32-bit version only, as a licence must be purchased to use the 64-bit version), and the benchmark scores calculated by the software were noted down. Finally, WinRAR (for Windows) and RAR (for Linux), both 64-bit versions, were used to compress the Matroska file at the highest level of compression available (the setting entitled Best).

Stage 5 – Power Management

To measure the effectiveness of the power management of both operating systems, the MPEG-4 video file created during stage 4 was played on a continuous loop in VLC Media Player, from a fully charged battery, until the 9 cell battery was completely discharged and the operating system initiated and completed a shutdown.

Presentation of Results

                                            Windows 10 Professional x64,     Linux Fedora 24 Workstation x64,
                                            Build 10586.494, Version 1511    Kernel 4.6.3-300.fc24

Stage 1 – Installation
Installation time                           24 minutes, 41 seconds           20 minutes, 39 seconds
Number of clicks/interactions               16                               15
Reboots required                            3                                1

Stage 2 – Start Up / Shutdown
Start up time                               18.35 seconds                    21.10 seconds
Shut down time                              9.59 seconds                     6.22 seconds
Hibernate time                              3.66 seconds                     2.09 seconds
Wake from sleep time                        2.17 seconds                     2.09 seconds

Stage 3 – I/O Intensive Operations
Small files – hard drive to hard drive      7.0 seconds                      3.1 seconds
Small files – hard drive to USB drive       56.8 seconds                     32.9 seconds
Small files – USB drive to hard drive       56.5 seconds                     16.4 seconds
Large file – hard drive to hard drive       28.50 seconds                    26.00 seconds
Large file – hard drive to USB drive        10 minutes, 2.3 seconds          8 minutes, 52.1 seconds
Large file – USB drive to hard drive        3 minutes, 13.1 seconds          2 minutes, 44.7 seconds

Stage 4 – Processor Intensive Operations
Geekbench 32-bit single core benchmark      1966                             2028
Geekbench 32-bit multi core benchmark       4043                             4068
WinRAR/RAR 5.4 x64 MKV compression
(maximum compression setting)               8 minutes, 56 seconds            9 minutes, 50 seconds
Handbrake MKV conversion (Normal setting)   1 hour, 41 minutes, 0 seconds    1 hour, 48 minutes, 19 seconds

Stage 5 – Power Management
Playback time of MPEG-4 file until
battery discharged from full                4 hours, 40 minutes, 45 seconds  3 hours, 49 minutes, 58 seconds

Table 7 – Results of the 5 experimental stages
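For reference, the manual stopwatch procedure used in the stage 3 I/O tests (move a set of files, record the elapsed time) could equally be scripted. The sketch below is illustrative only and did not form part of the experiment: it times a single file move using Python's standard library, with a small generated file standing in for the test corpus. The function name `timed_move` and the 1MB sample file are hypothetical.

```python
import shutil
import tempfile
import time
from pathlib import Path

def timed_move(src: Path, dst_dir: Path) -> float:
    """Move a file into dst_dir and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    shutil.move(str(src), str(dst_dir / src.name))
    return time.perf_counter() - start

# Demonstration: a 1 MB generated file stands in for the experiment's test corpus.
workdir = Path(tempfile.mkdtemp())
src_dir = workdir / "source"
dst_dir = workdir / "target"
src_dir.mkdir()
dst_dir.mkdir()

sample = src_dir / "sample.bin"
sample.write_bytes(b"\x00" * 1024 * 1024)

elapsed = timed_move(sample, dst_dir)
print(f"Move took {elapsed:.4f} seconds")
```

A scripted timer of this kind would remove the human reaction-time error inherent in the smartphone stopwatch used in the experiment, at the cost of only measuring operations that can be driven programmatically.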
Discussion of Results

The installation of Fedora completed just over 4 minutes faster than Windows. It may have been possible to install Fedora in even less time, as the install media booted first to a live desktop and then provided the option to install the operating system. During the initial boot from the optical media, an option to enter the installation program directly was presented, but the keypress to initiate it did not register and the live desktop booted instead. A probable reason for the quicker Fedora installation is that the distribution is stripped of unnecessary software (although the default installation did include the LibreOffice productivity suite and several other potentially useful applications).

Aside from the time taken to start up, Fedora was quicker to shut down, hibernate and wake from sleep. One reason for its slower start up time is that, although the unitest account was configured without a password, Windows 10 boots directly to the desktop without further interaction by default, whereas Fedora requires the username to be selected from the login page before the GNOME desktop starts. However, this does not fully account for the (almost) 3 second disparity between the two.

Fedora performed significantly better than Windows on all six of the I/O intensive tests. Fedora uses the EXT4 file system versus Microsoft's NTFS. These results are corroborated by research undertaken by Safee and Voknesh (no date), who stated that file operations of a sequential nature generally perform more poorly on Windows than on Linux.

During stage 4 (processor intensive operations), Windows performed much better than Fedora in both real world tests undertaken by the researcher (RAR compression and the Handbrake conversion, both using x64 binaries).
Interestingly, Geekbench (albeit benchmarked on 32-bit operations due to licensing restrictions) scored higher on Fedora than on Windows. It is possible that Fedora has been optimised to perform better on benchmarking software, not an entirely unheard of phenomenon (Cai et al, 1998), or perhaps that it performs better on 32-bit processor operations. If the latter is the case, the advantage is of limited value: at the time of writing most new desktops ship with 64-bit operating systems and applications, and optimisation efforts should therefore target those. Whatever the cause of the Geekbench results, the real world tests measured during the experiment show that Windows was far better in this regard.

The final stage, designed to test power management, yielded a startling disparity: Windows, whilst playing the same video file in the same media player (VLC), lasted just over 50 minutes longer than Fedora. Neither operating system used any third-party drivers to optimise power settings or consumption, so out of the box Windows was demonstrated to be better than Fedora in this regard.

As stated at the start of this section, part of the working assumption is that the performance of Linux should not be a reason for its lack of desktop market share. Based on the results, Fedora performed better than Windows in some cases (I/O, start up/shutdown) and worse in others (processor intensive operations and power management). Taking a balanced view, the research indicates that the operating systems were broadly comparable overall (depending on the usage scenario), thus proving to a satisfactory extent that Linux is similar to Windows from a performance angle, as per one of the tenets of the working hypothesis.
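The stage 4 compression measurement followed a simple compress-and-time pattern: compress a large file at the highest setting and record the duration. As a hedged illustration of that pattern, the sketch below uses Python's built-in lzma module as a stand-in for the WinRAR/RAR binaries actually used in the experiment; the payload and the function name `timed_compress` are invented for the example.

```python
import lzma
import time

def timed_compress(data: bytes, preset: int = 9) -> tuple:
    """Compress data at the given preset; return (compressed_bytes, seconds)."""
    start = time.perf_counter()
    compressed = lzma.compress(data, preset=preset)
    return compressed, time.perf_counter() - start

# Highly repetitive ~1 MB payload standing in for the experiment's 3.88GB MKV file.
payload = bytes(range(256)) * 4096
compressed, seconds = timed_compress(payload)
print(f"Compressed to {len(compressed) / len(payload):.2%} of original in {seconds:.3f} s")
```

Note that video container files are already compressed, so in the experiment itself the Best setting would have yielded little size reduction; the repetitive payload here compresses well purely for demonstration purposes.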
Chapter 4
Interviews with IT Professionals

"…A research method is a strategy of enquiry which moves from the underlying philosophical assumption to the research design and data collection…" (Myers and Avison, 2002)

In the previous chapter, it was established through experimentation that Linux is comparable to Windows overall from a functionality and performance perspective, and that a lack of functionality and/or performance can therefore be discounted as a reason for its lack of adoption. Further research was required in order to establish the reasons for Linux's lack of penetration in the desktop operating system market. Qualitative interviews were therefore undertaken with five IT professionals with a cumulative 96 years of experience, working in high pressure business environments, who between them have been responsible for evaluating, purchasing, maintaining and monitoring thousands of desktops and servers over the course of their careers.

The working hypothesis set out in the problem statement earlier in this paper postulates several reasons why Linux has not gained traction in the desktop space. Those tenets are restated below for the benefit of the reader:

• It (Linux) is not preinstalled on new PCs that are sold
• There are too many Linux distributions available, which has led to fragmentation
• Different package managers are used by different distributions
• Multiple desktop GUI environment choices
• A perceived lack of user friendliness and a steep learning curve
• Deficiencies in hardware support, especially for graphics adapters
• Paucity of available software/native versions of popular applications

In order to prove or disprove the above statements, the aforementioned information technology specialists were selected and interviewed because of their exposure to a variety of operating systems over many years, because they have worked (and are working) in diverse industries and geographical areas, and because they were likely to understand technical complexities and challenges, and hence the reasons for Linux's failure on the desktop, far better than a layman.

For the purposes of this qualitative research, the interviews were based upon the principle of 'phenomenology' (Husserl, 1970). Phenomenology is a method which encourages a respondent to provide information based upon his or her subjective perception of a particular situation. Questions are asked with the specific intent that the respondent will provide descriptive responses devoid of the motivations (or assumptions) of the interviewer, allowing insights into the behaviour, motivations and actions of the interview subject that are not influenced by the researcher.

Several potential interview candidates requested that the questions be provided in advance. The researcher made every effort to avoid doing so, with the specific intention that the research question would not be revealed: advance sight of the questions could allow the subject to extrapolate the motivations and assumptions of the interviewer, rendering the phenomenological method of interviewing null and void.
It was intended that the interviews would provide data that either supports or does not support the various tenets of the working hypothesis restated above, and that qualitative interviews would best provide the required insight to answer the research question, in comparison to the quantitative research methods that could alternatively have been undertaken.

The overall question framework for the qualitative interviews was created from the working hypothesis and other points of interest raised during the literature review phase of this paper. In total, up to 29 open ended questions (refer to Table 8) were to be put to each interviewee, in order to gather as much data as possible. However, the set of questions was treated as a guideline framework rather than followed rigidly, as certain answers elicited during an interview could (and in fact did) inform questions that would otherwise have been asked later in the question structure. The questions started with a more generalised line of enquiry, such as establishing the respondent's career history and the general changes to operating systems that they have observed over many years. The reasoning was to engage the subject in conversation and encourage them to open up about themselves, whilst narrowing the scope of enquiry with each subsequent question towards specific areas of interest.

1. How old are you?
2. Male or female?
   Rationale: Ascertain demographic information about the interviewee.

3. How long have you worked in IT?
4. Can you describe your career from its start to now?
5. Please discuss the technological changes that have occurred during your working career thus far.
   Rationale: Gain a general overview of the information technology specialist's employment background and history, and how information technology has changed during the course of their career.

6. Going through your career, can you discuss the operating systems you have used in a personal capacity, those your employer(s) have used, and how that may have changed over the years?
7. What about on portable devices? Please discuss your experience over the years with those devices, and how they have changed from an operating system standpoint.
8. If the answer to 7 does not elicit that Android uses Linux or that iOS uses BSD, inform the interviewee and ask their opinion on that.
   Rationale: Learn about the specialist's experience with and opinions of various operating systems, including on portable devices, both at work and personally. Also ascertain whether they are aware that these devices primarily run on BSD Unix or Linux.

9. Desktop PC sales are supposedly on the wane. Discuss: are they relevant anymore, and why?
10. During your career, have you ever installed an operating system on a desktop or laptop, and if so, what was it? If not Linux based, why not?
   Rationale: Dive deeper into desktop related topics. Could the lack of penetration be down to the desktop being less relevant in an era of mobile devices? Also understand the operating system installations undertaken by the respondent.

11. What has been your exposure to Unix and Linux?
12. What do you think (or know) about Linux in general?
   Rationale: Start narrowing the questioning down to Linux specifically, initially from an open ended standpoint.

13. Which Linux distributions are you aware of?
14. Are you aware that there are currently 815 unique distributions? What do you think about that?
15. Discuss your experiences with the Linux distributions you are aware of.
16. If the person has in depth experience, ask about preferred GUIs.
17. If the person has in depth experience, talk about package management for different distributions.
   Rationale: Focus on distribution related topics (as well as their respective GUIs and package managers where possible).

18. Is Linux easy to use? Why?
19. In some circles Linux is viewed as difficult to use, needing substantial training time and effort to be invested. What do you think about that statement?
   Rationale: Opinions on Linux's ease of use.

20. Why do you think Linux is rarely preinstalled on a new desktop or laptop?
   Rationale: Understand the interviewee's view on why Linux is not preinstalled by manufacturers.

21. In your opinion and experience, discuss hardware support with Linux.
22. Do you think Linux performs well on obsolete hardware? If yes, do you use it on obsolete hardware?
23. If not used on obsolete hardware, why not? If because of using Windows, try to elicit whether a pirated version is in use.
24. If 23 does not answer that, ask whether they think software piracy has an effect on the user base of Linux on the desktop.
   Rationale: Is hardware support (such as for graphics adapters) an impediment to adoption in the mind of the respondent? Do they believe Linux works well on old hardware? Ascertain whether software piracy is a reason for lack of adoption.

25. Do you think it is possible to do everything on a Linux desktop that one can do on a Windows desktop? Why?
26. If a lack of applications is not cited, ask what the respondent feels about the availability of applications on Linux versus other platforms.
27. With the prevalence of cloud and web apps, would this no longer be an issue (if they believe a lack of apps is an issue)?
   Rationale: Test the respondent's understanding of applications on the Linux desktop, and see whether it is viewed as a reason for lack of adoption.

28. Do users care what operating system runs on their desktop or laptop? Why?
29. If you could set up a network of workstations from scratch with a limited budget, would you consider Linux? Why?
   Rationale: See what the general opinion is based on their perceptions of their users, and whether Linux would be used to save costs on software licensing, or whether there is simply a general bias against using it.

Table 8 – Qualitative Interviews – Definitions and Measurements

Interview Procedure

The interviews were conducted between 26 July 2016 and 1 August 2016 and were recorded, so that the veracity of the research undertaken could be interrogated if future inspection were required. Additionally, transcripts of the interviews were written up and form Appendix B of this paper.
The respondents were advised that neither their name nor the name of any employer (past or present) would be published, to encourage openness and to build trust between interviewer and interviewee, unless they expressly requested that it be published. Furthermore, those interviewed were told beforehand that the interview would concern their knowledge of operating systems in general, rather than Linux specifically, to elicit as much data as possible (even if some was not relevant to the purpose of this paper) and to avoid pre-preparation on their part. Interestingly, all of those interviewed waived their right to anonymity and were happy for their real names, as well as the names of organisations (where provided), to be published. The respondents were also advised that they would be offered a copy of the completed thesis once submitted to the university, to ensure that the process was transparent and that their answers were published exactly as given; moreover, once the end of the questions had been reached, the interview subjects all wanted to know the result(s) of the research.

Discussion of Results

First of all, only one respondent was able to answer question 16, about the different GUI options available on Linux, authoritatively. Secondly, question 17, regarding different package managers, was asked in only one interview, as it was felt that it would detract from the flow of the conversation and that the respondents appeared unlikely to be able to answer it. The researcher felt that omitting it would not diminish the interviews, as questions were still asked about distributions and software in general. As a result of the incomplete answers to questions 16 and 17, two parts of the working hypothesis could be neither proved nor disproved, and are dropped for the balance of this paper.
Those two parts to be dropped were:

• Different package managers are used by different distributions
• Multiple desktop GUI environment choices

Overall, each of the five respondents described a similar path of progression in their experience with operating systems, generally following a DOS -> Windows 3.1 (or 3.11) -> Windows 95 pattern, and so on.

"…we were using this traditional operating system called MS-DOS 3.0 and then the evolution of the graphical applications with Windows 3.1 was an amazing thing in front of us. And then say going to the development of this OS by Microsoft of Windows 95, 98, and the other things. It made a revolutionary change…"

"…Operating systems – mainly Win systems all the way from Win something and then Win NT and 95 and above…"

"…So I started off personally using DOS, it used to be DR-DOS, then MS-DOS, so then you had Windows 95, as its own operating system. Then I've used Windows NT, XP, Windows 2000…"

All of the respondents have had exposure to Linux, to varying degrees. In two cases, first exposure came from free CD media provided on the cover of computer magazines, which in the mid to late 1990s, when Internet penetration was less widespread, appear to have been a vital enabler in the spread of knowledge and information to IT literate individuals (or those who wanted to become IT literate).

"…We used to get CDs with magazines and the CDs used to contain a lot of software. So it was through that that I came to know about Red Hat and Suse…"
"…they were popular (computer magazines) at the time before the internet and they used to come with a CD stuck to the front with some software for you try … And that was way to find out new things and try out new things and on one edition there was a full version of Linux to install…"

Most of the respondents agreed that the desktop as a platform is gradually becoming less relevant in an age of portability, although it was agreed that there would still be a place for such hardware in certain cases where large screens and other high end equipment are required for very specific tasks. The respondents almost overwhelmingly pointed towards the paradigm shift to mobility, an area which all respondents were aware is dominated by Android (Linux kernel) and iOS (BSD Unix and Mach kernel).

"…the desktop environment is slowly getting phased out and it is getting into a different type of working environment…"

"…some of the users require large amounts of storage, where expansion is required, additional expansion – like those who have high end graphics requirements…the desktop will not phase out from the market, or for the end user completely … The difference is the demand will not be the same as before…"

"…in another 5 years the complete, complete, computing platform will be changed with this portable equipment…"

"…we are using more portable devices like laptops, tablets, phablets and since the UI is more web based the need for desktop PCs as such is not really there. Especially desktop PCs are not really needed when we do not have a need for severe client resources like the old systems used to…"
"…there still is a place for that (the desktop) – one, the computing power and two, the form factor - sometimes you do need to sit at a desk, have a full sized keyboard, a full sized monitor and have a mouse for input, the ability to use those peripherals to do your job…"

"…I think desktops are becoming less relevant now…so as we going into this mobile era especially, portable devices, laptops, there's a bit of a market but we can see things declining there. Desktops are losing their market share for sure. I mean people want mobility. There may be specific functions, maybe something high end workstations where you are doing some sort of engineering or drawing – things like that, which require a lot more resources and necessitate desktops. But I think people are shifting more towards just getting their work done…"

During the literature review, there was some evidence of software piracy being responsible for Linux's reduced desktop market share (Casadesus-Masanell and Ghemawat, 2006; Kshetri, 2007). However, this was contradicted by the interviews, in which the notion was dismissed as a major contributing factor. It should be pointed out that the literature referred to predates this research by almost ten years, and the argument may have been more relevant previously.

"…I don't think it's down to piracy. I think it's down to what people are already familiar with and what they have…"

"…under those circumstances it shouldn't really be too much of a piracy issue for Linux…"

"…nowadays nobody is really using a pirated operating system…It's my opinion, nobody will be looking for any pirated software…"

A further point of interest is the generally accepted opinion that end users do not care what operating system is running on their device, whether it is a desktop, laptop or portable device. This corroborates earlier research identified during the literature review (Dedrick and West, 2003, and Dedrick and West, 2004). It was clearly established that an end user has a set of generally repetitive tasks to undertake (or perhaps overall repetitive patterns of use), be it for work or for leisure, and expects to be able to accomplish those tasks irrespective of the underlying operating system.

"…No, they just want to be familiar, they just want to get their job done, they just want to be able to do it…"

"…Say, for performing the task – how much, how quickly can they do it, how easily can they do it. That is a factor which the user will consider when choosing the OS (to use)…"

"…From a personal use (perspective) I don't think so. From a business use I think whatever makes them more comfortable as long as they can deliver…"

"…unless if I have specified it to them (the users), they would not know what is the operating system (in use)...nobody is getting into the operating system core capabilities. Their experience on functionality is based on core application level experience not on the operating system…"

Referring to the previous paragraph, one of the tenets of the working hypothesis was that the lack of available software, or of native versions of popular applications, is a contributing factor when answering the research question. What became apparent from the discussions is that one key suite of applications is missing from Linux distributions: Microsoft Office. Whilst alternatives are available, reading between the lines it would appear that their existence makes no difference to the perception of users.

"…they expect to find a piece of software just there available, such as Word, Excel, their Outlook…"

"…maybe that is because having used Windows systems for so long, but I feel much more comfortable working with Windows Excel than Google Excel (Sheets)…"
"…one of the challenges is that most of the applications, say around 75% of the applications are available or programmed for Windows…Applications availability is very poor under Linux, the Linux platform…"

"…So they start off talking about ok what are the common applications we use, so say Word, Excel… Can I use Word and Excel? Some of the features are not as exactly the same as apples for apples and I think that's the issue…I think the question they would ask if about applications – Can I do this? Can I do that? Can I use Word and Excel? For me it's a bit about the compatibility of the other applications – Windows has that edge over the others..."

"…It depends upon the compatibility. Some of the applications, the compatibility…"

"…there are a lot of functions that are missing from that, that are only available in the Windows version right…But from a user perspective, I think the applications are quite limited. So you have your own set of applications, I think Linux has it, but OS X has its own version of a word processor, or a spreadsheet, things like that – but functionality wise it's not up to the mark as some of the Windows Office suite applications are…"

During the interviews, the respondents were in overall agreement that the reason Linux is not often preinstalled on new PCs is primarily the lack of familiarity with it among end users. This feeds into the path of progression in operating system experience established in most of the interviews and discussed earlier in this section. It can be argued that the same holds for the general user populace, who are exposed to Windows from an early age, usually at school, and so naturally gravitate towards what they already know. In addition, Microsoft's deals with OEM manufacturers to preload its operating system are also a contributing factor, but not the substantive reason.
“…There’s few people who are familiar with it, so the level or knowledge in your typical family, whereas they’d know Windows already…”

“…People are taught Windows at school…”

“…I would put it under something called an oligopoly, which is something practiced by Microsoft. So, once they have captured the market, 90 plus, 95% plus of the market, then they can pretty much dictate or collude with various manufacturers to ensure that their systems get on board…”

“…well the consumer market has not accepted Linux, mass consumers have not adapted to the Linux environment. Every user has adopted the Windows environment. If anyone buys a laptop, anyone would go for only a Windows operating system. Even with consumers, any business that is selling in the market they would rather sell the Windows environment than a Linux preinstalled piece of hardware, unless of course it’s a mobile…”

“…I don’t know if there’s some sort of OEM contract in place or something like that, but one guess I would have to make would come down to user preference – what’s the most popular OS that they are used to…for the masses if you look at it everyone’s most familiar with or aware of is Windows – which I think is what sells. So someone’s going out there to buy a laptop and they come with Linux installed I don’t think they have such a big market share…”

When those interviewed were questioned about their knowledge of Linux distributions, the answers tended to centre on Red Hat, SUSE and Ubuntu – this tallies well with the keywords established from the content analysis undertaken during the literature review (refer to Figure 2). None of the respondents were aware of the vast number of distributions available, so the fragmentation of distributions can be disregarded as an influencing factor: those interviewed could not be influenced by something they did not know existed.
“…If someone told me that there was 200 I’d have thought well possibly, but 800 sounds like quite a lot…”

“…Because it is an open platform, anybody will be able to use their ideas and develop their own OS. This is actually adding more value and power to this particular OS because the contribution from multiple people and they have the liberty to take their own ideas into this OS – and that’s the reason why so many versions have been developed…”

“…Wow. It’s almost like, it feels like a fragmented market…”

“…but I am surprised to see that 800 variants or different flavours (exist)…”

“…I knew there were a lot, but I didn’t expect it to be that many. Definitely over 100 but that’s amazing. I think it’s a good and bad thing…but in terms of a regular user I think they would find it difficult if there isn’t a common standard across these distributions. To me it’s a good and bad thing. Each person has a flavour for what they want, or want to try. So they have many options, but in terms of standardisation and people having to keep track of different commands and different ways to do things, that could be a downside to it…”

From a hardware support standpoint, those interviewed generally believed that Linux's hardware support was adequate. The general consensus was that Linux runs well on hardware of differing levels of computational power and/or age. However, particular reference was made twice to graphics adapter support, which was part of the working hypothesis. This tenet was therefore borne out, although it is not considered a major contributing factor to the lack of market share – merely part of the reason.

“…Linux does have its deficiencies on the desktop - I’d say mainly down to graphics…”
“…compatibility is one of the challenges we face both with Linux and Solaris. Some of the devices are not recognised, and the drivers are not available and the functionality is restricted…so that way there are huge challenges when it is coming to this OS…”

“…Yes, the hardware was problematic. The drivers especially. You had to look for these compatible drivers. It wasn’t plug and play, so everything at that time I had to try and download several drivers to find one that would work. It was problematic… I think mostly it was printers, network cards, I think graphic cards…”

In the main, it was also established that there is a substantial learning curve when adopting Linux, as well as user-friendliness concerns. However, as several of those interviewed pointed out, this is most likely a result of many years of user exposure to Windows. One respondent also cited this learning curve when referring to Apple's OS X operating system, noting that well-known user commands such as the right-click behave differently – something Windows users have been accustomed to since their exposure to Windows 3.1/3.11 in the early 1990s. Whilst such matters may seem trivial on the face of it, they are not when a user simply wants to perform his or her particular patterns of use. Therefore, this part of the working hypothesis is also considered to be proved.

“…So, the problem is the dominance of Windows has been there for so long that it becomes so familiar when using the system. Just things like right click which on the Mac is a little different and people find that difficult, so why I said I don’t think there will be widespread adoption is that people are so familiar with the shortcuts and how to navigate through, I think that has an influence on their decision…”

“…people like us who are brought up on Windows - we know Windows inside out, and then move to another operating system have to learn everything again…”

“…On top of that, once the users are thoroughly trained, then they, there is reluctance on their part, on their side to want to learn or migrate to something else…”
“…what I would say is that the application that is extensively used is press the button, wait for the operating system to load. During that time, they must be looking around, looking at the phone, having some coffee or something, they don’t care how it comes up…other than that I don’t think that anyone is really noting what is an operating system. Back then they didn’t notice what was the operating system and now also they are not knowing that…”

“…I would say that’s fairly accurate. Especially to a person, coming from my background…setting up the Squid proxy, it did take some time to pick it up so there is some training, even though it was self-learning. But if you are planning to deploy this, you know say in an office place you would need some training to get used to it...Over the years, just because they are so used to one OS it could be down to that…”
Chapter 5 Conclusion

“…The desktop hasn't really taken over the world like Linux has in many other areas, but just looking at my own use, my desktop looks so much better than I ever could have imagined…” (Linus Torvalds, speaking at the Embedded Linux Conference, 2016) (Bhartiya, 2016)

The research question for this paper is “Why has Linux, despite its popularity on many platforms, failed to be successful on the desktop?” To the satisfaction of the researcher, the two-pronged research has answered that question – it is almost completely due to the lack of popular desktop applications. On the most popular desktop operating system platform (Microsoft Windows), it is Microsoft's Office suite that one could term “the killer app”. The idea of a platform either succeeding or failing on the strength of a killer app was also raised by West and Mace (2010) when they discussed the runaway success of the iPhone. In that particular case the killer app was the Safari web browser, because it could readily access and take advantage of the estimated 1 trillion web pages available at no cost to users with desktop browsers, in an era when mobile operators still operated a ‘walled garden’ of services – offering their own selective content whilst charging their customers an additional subscription cost to access it.

Linux's lack of a killer app on the desktop, and its overall lack of third-party applications, is considered by the researcher to be the primary reason for its failure to succeed in the desktop market, based upon the findings of the research undertaken. This issue was discussed in the literature review section – citing papers from the early 2000s (Dedrick and West, 2003, Dedrick and West, 2004, Kshetri, 2004 and
Decrem, 2004) – and clearly nothing has changed in the intervening years between then and the time of writing, as evidenced by the data collected during the qualitative research. A lack of third-party applications was also responsible for the failure of both BlackBerry's BB10 operating system (Reilly, 2016 and Spence, 2013) and Microsoft's Windows Phone operating system (Warren, 2015 and Thurrott, 2016) – so this contention is backed by compelling real-world evidence. Specifically, in the case of BlackBerry, the operating system kernel was not versatile enough to be successful on other platforms – whereas with Windows Phone there is still an opportunity, due to Microsoft CEO Satya Nadella's Continuum (phone as a PC) strategy for Windows Phone: “…three years from now, I hope that people will look and say, ‘Oh wow, that’s right, this is a phone that can also be a PC’…” (Thurrott, 2016). Ubuntu is also working on a similar approach with its Unity 8 UI, which aims to converge desktop and portable devices (Wallen, 2016).

The other key reason for Linux's desktop failure is that users in the general computing populace have become used to Windows, and have evolved with Windows as it has evolved – a point that became readily apparent during the interview research undertaken. Microsoft gained its foothold on the desktop with the release of Windows 3.1 in 1992 (Gibbs, 2014), long before the Linux kernel matured into version 1.0 on 14 March 1994. Windows 3.1 is still found in the wild, for example running the air traffic control system at Orly Airport in Paris, France (Waugh, 2015 and Whittaker, 2015).

During the course of the interviews conducted, none of the respondents felt that Linux was technologically inferior to Windows or other desktop operating system environments. In most cases, those interviewed went on to praise Linux's design and its use of computational and memory resources.
Those opinions are corroborated by the experimental research undertaken, which reached the conclusion that overall (depending on the usage case scenario), Windows 10 and Fedora 24 were generally comparable performance-wise.
It is the opinion of the researcher that Linux has succeeded on other platforms because it was there at the beginning of those particular breakthroughs or advances in technology. This idea was substantiated by two papers uncovered during the literature review (West and Dedrick, 2001 and West and Dedrick, 2001), which discussed how new platforms often become accepted when they are used to support and underpin new usage case scenarios. It was specifically pointed out that Linux's most common early usage cases were Internet-centric – web services, firewalls, security and other similar services – because it was there to be adapted to those particular types of usage at the start of the Internet era. Similarly, when Google, as part of the Open Handset Alliance, began development of the Android operating system in late 2007, with the Linux kernel at its heart (Industry Leaders Announce Open Platform for Mobile Devices, 2007), it was at the cusp of the portable computing era discussed by Dukan et al (2014) – a point also established during the literature review. Most of the interview respondents, based on their own subjective experiences, discussed the very same matter when questioned (refer to the Discussion of Results portion of the Interview section). This convergence of computing and communications was prophesied in 1977 by Koji Kobayashi, then president of NEC, when he spoke of a time when both telecommunications and (presumably mobile) computing would converge as a result of eventual improvements to the design and technology of integrated circuits (Rumelt, 2011).
As Dukan et al (2014) explained, this era of portability has been driven by the low-power processors used in mobile/tablet devices (dovetailing with Kobayashi), low-power sensor networks and the lightweight operating systems based on the Linux kernel that power them – and this has now been extended to wearable devices such as smartwatches, as well as other IoT (Internet of Things) devices. In almost every case, these devices are running a Linux kernel.
Even Microsoft has been forced to recognise that Linux is a major force in the operating systems market. Microsoft announced on 6 April 2016, as part of its Windows 10 Insider Preview Build 14316 (Aul, 2016), that users would be able to run Ubuntu's Bash (Bourne Again Shell) natively on Windows. This was enabled by Microsoft and Ubuntu working together to implement WSL (the Windows Subsystem for Linux), allowing a user to run “…tens of thousands binary packages available in the Ubuntu archives (using Bash on Ubuntu on Windows) …” (Vaughan-Nichols, 2016) – so that developers would continue to use Windows. One interview respondent talked about Linux as a developer's platform of choice during the interviews.

In Appendix A, the theory of ‘Cumulative Selection’ (Dawkins, 1986) is discussed. Further credence was lent to the theory's applicability to technological amelioration during one of the interviews: “…if you go and develop something, it makes sense to try and work off something which already exists, rather than try and create it from scratch. So you know it’s more (if) you’re going to start a new operating system and if something can give you a head start it would make sense to use that head start, so in a way it makes sense to use the work others have done already if it’s helpful to you…”

Linux has been demonstrated to be a versatile, robust and adaptable operating system kernel. This versatility and adaptability to almost any type of usage scenario has allowed its successful propagation across a multitude of platforms. In the case of the desktop, in the opinion of the researcher, Linux was three years too late: Windows 3.1 had already taken hold, and by the time kernel version 1.0 was released in 1994 and Linux was in a position to compete, the opportunity had gone. Finally, due to the lack of a ‘killer app’, there was no compelling reason for existing Windows users to switch to Linux.
So, in conclusion, Linux's failure on the desktop cannot be reversed, but with the declining relevance of the desktop this matters less – and over the next five years it is most likely to be other operating systems that will be searching for relevance and trying to catch up with Linux.
Appendix A – The history of Unix and Unix-like operating systems

What is past is prologue. (William Shakespeare, The Tempest 2.1.253)

In order to better understand the current challenges faced by Linux in trying to make a breakthrough on the desktop, it is important first to consider Linux within a historical context. In this section it is contended that Linux is the logical culmination of a phenomenon known as ‘Cumulative Selection’ – the concept that, as a result of a sequence of non-random, cumulative steps, a complex end-product is derived from beginnings that were comparatively simple (Dawkins, 1986).

In 1440, the printing press was invented by Johannes Gutenberg. His invention was an aggregation of existing technology, combining oil-based ink with the screw presses previously used to produce wine and olive oil (Shenkar, 2010). Had those technologies not existed, Gutenberg would not have been in a position to converge them into his new device – which, one could argue, turned out to be the most important invention in the history of mankind. Further strengthening this train of thought, according to Curwen and Whalley (2014), technological amelioration usually advances via a series of generations (or part generations). They also point out that such amelioration is usually achieved through better hardware or software, or even the combination of both. Using the aforementioned ideas of both cumulative selection and technological amelioration, this section hopes to demonstrate and explain that Linux is an amalgam of all the useful