Felienne
Delft University of
Technology
Putting the science
in computer science
This slidedeck is about the science
in computer science
We all love science,
right?
We love ‘crazy’ science like this
http://the-toast.net/2014/02/06/linguist-explains-grammar-doge-wow/
And this
http://allthingslinguistic.com/post/56280475132/i-can-has-thesis-a-linguistic-analysis-of-lolspeak
What is science?
So, with science, we should be able
to answer questions about the
universe, like this one.
You’d expect computer scientists to
somewhat respectfully debate this.
You get:
interpreted >> compiled
JavaScript 4 ever!
Pure is the only true path
C++ is for real coders
PHP sucks
Pascal is very elegant
Unfortunately, reality is more like
this.
State of the
debate is not so
scientific
But, some people are trying! In this
slidedeck I’ll highlight some of the
interesting results those people
have found so far.
Researchers at Berkeley have conducted a very extensive survey on programming language factors.
http://www.eecs.berkeley.edu/~lmeyerov/projects/socioplt/viz/index.html
They collected 13,000 (!) responses, and their entire dataset is explorable online.
They found loads of interesting facts; I really encourage you to have a look at their OOPSLA ‘13 paper (Empirical Analysis of Programming Language Adoption).
My favorite is this graph: the factors for choosing a particular language.
Notice that the first factor that has something to do with the language itself is in 6th place.
Two other language-related factors are in 8th and 12th place.
What correlates most
with enjoyment?
Meyerovich and Rabkin also
looked into what makes
programmers happy.
Want to guess?
It’s expressiveness. That is what correlates most strongly with enjoyment.
Could we measure it?
So what language is most
expressive? Is there a way to
measure this?
Donnie Berkholz, a researcher at RedMonk, came up with a way to do this.
He compared commit sizes of different projects (from Ohloh, covering 7.5 million project-months).
His assumption is that a commit has more or less the same ‘value’ in terms of functionality across different languages.
http://redmonk.com/dberkholz/2013/03/25/programming-languages-ranked-by-expressiveness/
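To make that metric concrete, here is a minimal sketch in Java (my illustration, with made-up numbers; this is not Berkholz’s actual pipeline). Given the lines changed per commit, grouped by language, a language scores as more expressive when a typical commit needs fewer changed lines; the median and the 25th/75th percentiles are what the box in his plot summarizes.

    import java.util.*;

    /** Minimal sketch: rank languages by typical commit size,
     *  assuming smaller commits ~ more functionality per line. */
    public class ExpressivenessSketch {

        /** The p-th percentile (0..100) of the given commit sizes. */
        static double percentile(List<Integer> sizes, double p) {
            List<Integer> sorted = new ArrayList<>(sizes);
            Collections.sort(sorted);
            int index = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
            return sorted.get(Math.max(0, index));
        }

        public static void main(String[] args) {
            // Hypothetical data: lines changed per commit, per language.
            Map<String, List<Integer>> commitSizes = Map.of(
                    "Haskell", List.of(8, 12, 20, 35, 60),
                    "Java",    List.of(30, 55, 80, 120, 400));

            commitSizes.forEach((language, sizes) -> System.out.printf(
                    "%-8s median=%.0f  box=[%.0f..%.0f]%n",
                    language,
                    percentile(sizes, 50),    // thick line in the plot
                    percentile(sizes, 25),    // bottom of the box
                    percentile(sizes, 75)));  // top of the box
        }
    }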
In the graph, the thick line indicates the median, the box spans the 25th to 75th percentiles, and the whiskers the 10th and 90th.
Let’s have a look at what language goes where!
Loads of interesting things to see here. For instance, all the popular languages (in red) are on the low end of expressiveness. This corroborates Meyerovich and Rabkin’s findings: while programmers enjoy expressiveness, it does not seem to be a factor in picking a language.
Also interesting is the huge difference between CoffeeScript and JavaScript, although this might be due to the fact that CoffeeScript is young and its commits are thus quite ‘clean’.
Finally, and unsurprisingly, the functional languages (Haskell, F#, the Lisps) come out as the most expressive.
Another interesting fact came out
of Meyerovich and Rabkin’s study.
Could we measure it?
Stefan Hanenberg
Static versus
dynamic, does it
really matter?
Let’s experiment!
Stefan Hanenberg tried to measure
whether static typing has any
benefits over dynamic typing.
Two groups
He divided a group of students into
two groups, one with a type system
and one without, and had them
perform small maintainability
tasks.
Let’s summarize his results... Star
Wars style!
On the left, we have the lover of
dynamically typed stuff. He has gone
through some ‘type casts’ in his life
and he is sick of it!
On the right is his static opponent.
Let’s see how this turns out.
But what about
type casting?
One of the arguments dynamic-typing proponents make is that type casting is annoying and time-consuming.
Hanenberg’s experiment showed:
It does not really matter. For programs
over 10 LOC, you are not slower if you
have to do type casting.
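For anyone who has never felt that pain: the kind of cast being argued about looks roughly like this. This is a purely illustrative Java fragment, not code from Hanenberg’s experiment.

    import java.util.ArrayList;
    import java.util.List;

    public class CastingExample {
        public static void main(String[] args) {
            // Without type information (a raw, Object-typed collection) the
            // compiler cannot help, so every read needs an explicit cast.
            List rawNames = new ArrayList();
            rawNames.add("Ada");
            String first = (String) rawNames.get(0);   // the annoying cast

            // With type information (generics) there are no casts, and a
            // mistake such as adding an Integer is rejected at compile time.
            List<String> typedNames = new ArrayList<>();
            typedNames.add("Ada");
            String second = typedNames.get(0);         // no cast needed

            System.out.println(first + " " + second);
        }
    }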
I’m sure I can fix
type errors just as
quickly
Not even close
The differences are HUGE!
Blue bar = Groovy, Green = Java
Vertical axis = time
In some cases, the run-time errors
occurred at the same line where the
compiler found a type error.
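To illustrate that last point (my sketch, not material from the experiment): in the statically typed version below the compiler rejects the bad call outright, while the Object-typed, ‘dynamic-style’ version compiles fine and only fails on that very line at run time.

    public class WhenErrorsSurface {
        // Statically typed: the compiler rejects the bad call outright.
        // int n = lengthOf(42);   // compile-time error: int is not a String
        static int lengthOf(String s) {
            return s.length();
        }

        // "Dynamic-style": everything is Object, so the same mistake
        // compiles and only explodes while the program is running.
        static int lengthOfDynamic(Object s) {
            return ((String) s).length();   // ClassCastException at run time
        }

        public static void main(String[] args) {
            System.out.println(lengthOfDynamic("hello"));  // works: 5
            System.out.println(lengthOfDynamic(42));       // fails here, at run time
        }
    }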
But… my dynamically
typed APIs, they
must be quicker to
use
No
I’ll document the
APIs!
Won’t
help!
Turns out, type names help more than
documentation.
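A hedged illustration of why that might be (mine, not Hanenberg’s): the type names in a signature answer exactly the questions the documentation would otherwise have to answer, and the compiler enforces them. The domain types below are hypothetical.

    import java.util.List;
    import java.util.Map;

    /** Illustrative only: two ways of telling callers what an API expects. */
    interface ReportApi {

        // Untyped: the Javadoc has to carry all the information,
        // and nothing checks that the caller actually read it.
        /**
         * @param data a map from customer id to that customer's orders
         * @return the report, as lines of text
         */
        Object buildReport(Object data);

        // With type names, the signature documents itself,
        // and the compiler enforces it.
        List<String> buildReport(Map<CustomerId, List<Order>> data);

        // Hypothetical domain types, just for the example.
        record CustomerId(long value) {}
        record Order(String sku, int quantity) {}
    }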
I’ll use a better
IDE!
Won’t
help!
Stefan Hanenberg
It looks like (Java-
like) static type
systems
really help in
development!
While more research is needed, we
might conclude this.
Do design patterns work?
Let’s tackle another one!
Walter Tichy
Walter Tichy wanted to know
whether design patterns really help
development.
He started small, with a group of
students, testing whether giving
them info on design patterns
helped understandability.
Again two
groups, but
different
Again, students were divided into two groups, but the setup was a bit different.
Two programs were used (PH and AOT), and some students got the version with pattern documentation first and the one without second.
On different programs, obviously; otherwise the students would have known the patterns were there in the second test.
Prechelt et al, 2002. Two controlled experiments assessing
the usefulness of design pattern documentation in
program maintenance. TSE 28(6): 595-606
The results clearly show that knowing a pattern is there helps in performing maintenance tasks.
However, it was not entirely fair to measure time, as not all solutions were correct. If you look at the best solutions, you see a clear difference in favor of the documented version. Tichy updated the study design in the next version, where only correct solutions were taken into account.
Again two
groups, but
again
different
In this next version, professionals were used instead of students. Furthermore, the setup was different. The same experiment was done twice: first without the participants knowing patterns, then, after they took a course on patterns, they did the test again.
Again, the results showed that the version with patterns turned out to be easier to modify.
For some patterns, though (like Observer), the differences between pre- and post-test were really big. For these patterns, the course made a big difference. In other words: patterns only help if you understand them.
They work!
It has to do
with how
the human
brain
works
Long term
memory
Short term
memory
The human memory works a bit like a computer. Long-term memory can save stuff for a long time, but it is slow. Short-term memory is quick, but can only retain about 7 items.
Using patterns, you use only one slot (“this is an Observer pattern”) rather than several for “this class is notified when something happens in that other class”.
“The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information”, George Miller, 1956
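To make the chunking argument concrete, here is a tiny, hypothetical Observer in Java. Once you recognize the shape, “NewsFeed notifies its subscribers” is a single chunk in memory instead of several separate facts about who calls whom.

    import java.util.ArrayList;
    import java.util.List;

    // A minimal Observer: the subject keeps a list of observers and
    // notifies them all when something happens.
    interface Subscriber {
        void update(String headline);
    }

    class NewsFeed {
        private final List<Subscriber> subscribers = new ArrayList<>();

        void subscribe(Subscriber s) {
            subscribers.add(s);
        }

        void publish(String headline) {
            for (Subscriber s : subscribers) {
                s.update(headline);   // the pattern's essence
            }
        }
    }

    class Demo {
        public static void main(String[] args) {
            NewsFeed feed = new NewsFeed();
            feed.subscribe(headline -> System.out.println("Got: " + headline));
            feed.publish("Design patterns shown to help maintenance");
        }
    }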
They work!
Experiments
show it,
cognitive
science helps us
understand why
Does Linus’ Law hold?
Given enough
eyeballs, all bugs
are shallow
Apparently, the law is not so universal; we know that code reviews are hard to do right for larger pieces of software.
Not
impressed?
Yeah, a tweet is not exactly
science. Don’t worry, I have some
proof too.
Researchers at Microsoft Research published a study in which they connected the number of bugs in the release of Windows Vista (gathered through bug reports) with organizational metrics, like the number of people who worked on a particular binary.
Nagappan et al, 2008. The Influence of Organizational
Structure On Software Quality: An Empirical Case Study,
ICSE 2008
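As a hedged sketch of what such an organizational metric looks like in practice (my illustration, not the paper’s method): count how many distinct people touched each binary or file, using whatever commit records your version control system gives you. The components and authors below are made up.

    import java.util.*;

    /** Illustrative only: 'number of distinct engineers who touched each
     *  component', computed from (component, author) commit records. */
    public class TouchersMetric {

        record Commit(String component, String author) {}

        static Map<String, Set<String>> touchersPerComponent(List<Commit> history) {
            Map<String, Set<String>> touchers = new HashMap<>();
            for (Commit c : history) {
                touchers.computeIfAbsent(c.component(), k -> new HashSet<>())
                        .add(c.author());
            }
            return touchers;
        }

        public static void main(String[] args) {
            // Hypothetical history; in practice this comes from your VCS.
            List<Commit> history = List.of(
                    new Commit("kernel.dll", "alice"),
                    new Commit("kernel.dll", "bob"),
                    new Commit("kernel.dll", "carol"),
                    new Commit("shell.dll", "alice"));

            touchersPerComponent(history).forEach((component, people) ->
                    System.out.println(component + " touched by "
                            + people.size() + " people"));
        }
    }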
They found that the opposite of
Linus’ law is true. The more people
work on a piece of code, the more
error-prone it is!
More
touchers ->
more bugs
More tied to
bugs than
any code
metric
These, and other, organizational metrics are more strongly tied to quality than any code metric!
This means that if you want to
predict future defects, the best you
can do is look at the company!
Might be me, but I think that is
surprising.
Putting the science
in computer science
Felienne
Delft University of
Technology
That’s it! I hope you got a sense of the usefulness of software engineering research in practice.
If you want to keep up, follow my blog, where I regularly write about the newest SE research.

Editor's Notes

  • #41 Static type system really helped in finding errors. Often
  • #45 Type names help more than documentation!