Jonathan D. Lettvin provides a summary of his skills and experience as a software engineer and inventor. He has over 30 years of experience producing technical solutions for companies in various industries. Some of his specialties include data visualization, security, medical devices, robotics, and scientific computing. He is proficient in many programming languages and techniques. His objective is to solve complex technical problems by inventing new algorithms and approaches.
Building a Dynamic Bidding system for a location based Display advertising Pl... (Ekta Grover)
Experimentation to Productization: Building a Dynamic Bidding system for a location-aware Ecosystem. Slides from my Fifth Elephant talk, Bangalore, 2014.
Open & reproducible research - What can we do in practice? (Felix Z. Hoffmann)
- There is a reproducibility crisis in computational research even when code is made available. Out of 206 computational studies in Science magazine since a policy change mandating sharing, only 26 directly provided their code and data. Of those judged potentially reproducible when code was available, more than half still required significant effort to reproduce.
- Making research fully reproducible requires addressing issues like difficult computational environments, long run times, dependency on previous results, and clarity on what is required to reproduce a single finding. Following principles like ensuring code is re-runnable, repeatable, reproducible, reusable, and replicable can help achieve reproducibility. Publishing code on platforms like Zenodo and OSF can also aid reproducibility.
Provenance in Production-Grade Machine Learning (Anand Sampat)
Over the next few years, every company must develop a strategy to leverage artificial intelligence and machine learning to stay relevant and beat out competitors. This requires hiring talented data scientists as well as DevOps and data engineers who can put these models into production. Today, finding that perfect combination of talent can be difficult, but a focus on retraining and productivity tools can increase a small team's impact on business ROI by over 10x. In this technical talk, we discuss how enterprises can better prepare their employees to deploy artificial intelligence and machine learning into production by using the same techniques used in software to add provenance, reliability, and efficiency to these processes. Specifically, we describe the benefits of adding provenance, including reliable deployments and builds, A/B testing, continuous deployment, and automation, and show how they can decrease the time to business ROI by over 10x.
Version Control in Machine Learning + AI (Stanford) (Anand Sampat)
The talk starts by outlining the history of conventional version control before explaining QoDs (Quantitatively Oriented Developers) and the unique problems their ML systems pose from an operations perspective (MLOps). The only status quo solutions are proprietary in-house pipelines (exclusive to Uber, Google, and Facebook) and manual tracking with fragile "glue" code for everyone else.
Datmo works to solve this issue by empowering QoDs in two ways: making MLOps manageable and simple (rather than completely abstracted away) and reducing the amount of glue code so as to ensure more robust end-to-end pipelines.
The talk walks through a simple example of using Datmo with the Iris classification dataset. Later workshops will expand on this to show how Datmo can work with other data pipelining tools.
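For context, here is a minimal sketch of the kind of Iris classification experiment such a tool would snapshot. It uses plain scikit-learn; Datmo's own API is not shown in this summary, so it is deliberately left out:

```python
# Minimal Iris classification experiment of the kind a tool like Datmo
# would snapshot (code version, parameters, metrics). scikit-learn only;
# the Datmo API itself is not shown here.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)

# The held-out accuracy is the kind of metric an MLOps tool would record
# alongside the code version and hyperparameters.
print("accuracy:", model.score(X_test, y_test))
```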
Machine Learning in Static Analysis of Program Source Code (Andrey Karpov)
Machine learning has become firmly entrenched in a variety of fields, from speech recognition to medical diagnosis. The approach is so popular that people try to use it wherever they can, yet some attempts to replace classical approaches with neural networks turn out to be unsuccessful. This time we'll consider machine learning in terms of creating effective static code analyzers for finding bugs and potential vulnerabilities.
SDCSB Advanced Tutorial: Reproducible Data Visualization Workflow with Cytosc... (Keiichiro Ono)
This document provides an overview of a tutorial on building reproducible network data visualization workflows using Cytoscape and IPython Notebook. The tutorial will cover integrating data, analyzing networks, visualizing results, and preparing outputs for publication. It will demonstrate setting up a portable data analysis environment using Docker and sharing work through GitHub. The bulk of the tutorial will focus on using IPython Notebook as an electronic lab notebook for interactive and reproducible experiments with Cytoscape.
Thomas Mylonas has a Bachelor's Degree in Computer Science and Mathematics. He has developed several projects including a Sudoku puzzle solver using constraint satisfaction problems, games for playing different types of Sudoku puzzles, a diary application for Android and Qt, an inverted index and graph algorithms implemented in C++, and algorithms for the closest pair problem and summing tetrahedral numbers in Java. He has also done work in web development, software engineering, object oriented analysis, scheduling techniques, digital circuits, and electronics simulation.
Biology, medicine, physics, astrophysics, chemistry: all these scientific domains need to process large amounts of data with more and more complex software systems. For achieving reproducible science, there are several challenges ahead involving multidisciplinary collaboration and socio-technical innovation, with software at the center of the problem. Despite the availability of data and code, several studies report that the same data analyzed with different software can lead to different results. I see this problem as a manifestation of deep software variability: many factors (operating system, third-party libraries, versions, workloads, compile-time options and flags, etc.), themselves subject to variability, can alter the results, up to the point that they can dramatically change the conclusions of some scientific studies. In this keynote, I argue that deep software variability is both a threat and an opportunity for reproducible science. I first outline some works about (deep) software variability, reporting on preliminary evidence of complex interactions between variability layers. I then link the ongoing work on variability modelling to deep software variability in the quest for reproducible science.
CGO/PPoPP'17 Artifact Evaluation Discussion (enabling open and reproducible r... (Grigori Fursin)
This year we had a record number of artifact submissions at CGO/PPoPP'17: 27 vs 17 two years ago. It is really great to see that researchers are now taking AE seriously, but it also highlighted new issues with AE scalability and the lack of a common experimental methodology and workflow frameworks in computer systems research. Therefore, we discussed a few possible solutions for the next AE, including public artifact reviewing, common workflow frameworks, artifact appendices, partial artifact evaluation (artifact available, artifact validated, experiment reproduced) and "tool" papers. Please feel free to provide your own feedback to the AE steering committee!
More details:
* http://dividiti.blogspot.fr/2017/01/artifact-evaluation-discussion-session.html
* http://cTuning.org/ae
* http://cKnowledge.org
Runtime Behavior of JavaScript Programs (IRJET Journal)
The document analyzes the dynamic behavior of JavaScript programs by collecting execution traces from 103 websites and benchmark suites. It finds that JavaScript programs exhibit more dynamism than commonly assumed, with only 81% of call sites being monomorphic and many functions being variadic. Constructor functions also frequently return objects with different property sets. The study aims to provide a more accurate characterization of JavaScript behavior to inform future research.
Reactive Microservices with Spring 5: WebFlux (Trayan Iliev)
On November 27 Trayan Iliev from IPT presented “Reactive microservices with Spring 5: WebFlux” @Dev.bg in Betahaus Sofia. IPT – Intellectual Products & Technologies has been organizing Java & JavaScript trainings since 2003.
Spring 5 introduces a new model for end-to-end functional and reactive web service programming with Spring WebFlux, Spring Data & Spring Boot. The main topics include:
– Introduction to reactive programming, Reactive Streams specification, and project Reactor (as WebFlux infrastructure)
– REST services with WebFlux – a comparison between annotation-based and functional reactive programming approaches for building them.
– Router, handler and filter functions
– Using reactive repositories and reactive database access with Spring Data. Building end-to-end non-blocking reactive web services using Netty-based web runtime
– Reactive WebClients and integration testing. Reactive WebSocket support
– Realtime event streaming to WebClients using JSON Streams, and to JS client using SSE.
#1 The diversity of terminology shows the large spectrum of shapes DSLs can take.
#2 As syntax and development environment matter, DSLs should allow the user to choose the right shape according to their usage or task.
#3 A metamorphic DSL vision is proposed where DSLs can adapt to the most appropriate shape, including transitioning between shapes based on usage or task.
Sensor data is streamed in realtime from an Arduino with accelerometers, gyroscopes, a 3D compass, an ultrasound distance sensor, etc. using the UDP protocol. The data processing is done with alternative reactive Java implementations: callbacks, CompletableFutures, and the Spring 5 Reactor library. The web 3D visualization with Three.js is streamed using Server Sent Events (SSE).
A video for the IoT demo is available @YouTube: https://www.youtube.com/watch?v=AB3AWAfcy9U
All source code of the demo is freely available @GitHub: https://github.com/iproduct/reactive-demos-iot
There are more reactive Java demos in the same repository - callbacks, CompletableFuture, realtime event streaming. Soon I'll add a description of how to build the device and upload the Arduino sketch, as well as describe the CompletableFuture and Reactor demos and the 3D web visualization part with Three.js. Please stay tuned :)
Most modern software systems are subject to variation or come in many variants. Web browsers like Firefox or Chrome are available on different operating systems and in different languages, while users can configure 2000+ preferences or install numerous third-party extensions (or plugins). Web servers like Apache, operating systems like the Linux kernel, or a video encoder like x264 are other examples of software systems that are highly configurable at compile-time or at run-time for delivering the expected functionality and meeting the various desires of users. Variability ("the ability of a software system or artifact to be efficiently extended, changed, customized or configured for use in a particular context") is therefore a crucial property of software systems. Organizations capable of mastering variability can deliver high-quality variants (or products) in a short amount of time and thus attract numerous customers, new use-cases or usage contexts. A hard problem for end-users or software developers is to master the combinatorial explosion induced by variability: hundreds of configuration options can be combined, each potentially with distinct functionality and effects on execution time, memory footprint, quality of the result, etc. The first part of this course will introduce variability-intensive systems, their applications and challenges, in various software contexts. We will use intuitive examples (like a generator of LaTeX paper variants) and real-world systems (like the Linux kernel). A second objective of this course is to show the relevance of Artificial Intelligence (AI) techniques for exploring and taming such enormous variability spaces. In particular, we will introduce how (1) satisfiability and constraint programming solvers can be used to properly model and reason about variability; and (2) how machine learning can be used to discover constraints and predict the variability behavior of configurable systems or software product lines.
http://ejcp2019.icube.unistra.fr/
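As a toy illustration of the second point (machine learning predicting the performance behavior of a configurable system), here is a small sketch, not taken from the course materials, that fits per-option performance effects by least squares over sampled configurations:

```python
# Toy illustration of learning the performance behavior of a configurable
# system: each row is a configuration (binary options), the target is a
# measured execution time. Synthetic data; not from the course materials.
import numpy as np

rng = np.random.default_rng(0)
n_options = 8
configs = rng.integers(0, 2, size=(200, n_options))   # sampled configurations
true_effects = rng.normal(0, 1, size=n_options)       # hidden per-option cost
times = configs @ true_effects + 10 + rng.normal(0, 0.1, 200)

# A least-squares fit recovers each option's effect on execution time,
# letting us predict configurations we never measured.
A = np.column_stack([configs, np.ones(len(configs))])
coef, *_ = np.linalg.lstsq(A, times, rcond=None)
print("learned option effects:", np.round(coef[:-1], 2))
```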
Presentation from BGOUG conference Nov 17, 2017.
Since September 2017, Java 9 has been generally available. It offers many enhancements:
• Modularity – provides clear separation between public and private APIs, stronger encapsulation & dependency management.
• JShell – using and customizing Java 9 interactive shell by example
• Process API updates – feature-rich, async OS process management and statistics
• Reactive Streams, CompletableFuture and Stream API updates
• Building asynchronous HTTP/2 and WebSocket pipelines using HTTP/2 Client and CompletableFuture composition
• Collection API updates
• Stack walking, and other language enhancements (Project Coin)
Discussed topics are accompanied by live demos available for further review @ github.com/iproduct.
Cytoscape and External Data Analysis Tools (Keiichiro Ono)
This document summarizes Keiichiro Ono's lab meeting presentation about developing a RESTful API for Cytoscape. The presentation covered the motivation for external tools to programmatically access Cytoscape, the design of a new Cytoscape module that exposes a RESTful API, and a proof-of-concept demo. The goal is to make Cytoscape more accessible for hardcore users to embed in automated workflows from languages like R and Python.
This document proposes exploiting the enumeration of all configurations of a feature model as a new perspective for automated reasoning with distributed computing. It discusses (1) enumerating configurations in parallel to improve scalability, (2) pre-compiling configurations offline to speed up costly operations like counting, core, and dead features, and (3) how the approach is not always best and depends on the feature model, operation, and time requirements. Preliminary evaluations show the approach is more efficient for large models but not always, and future work is needed to determine when enumeration is best versus traditional solvers.
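A minimal sketch of the enumeration idea, using an invented three-feature model: once all valid configurations are materialized, counting and core/dead-feature detection reduce to simple scans over the enumeration.

```python
# Enumerating all configurations of a tiny, hypothetical feature model
# with itertools; counting plus core- and dead-feature detection become
# simple scans over the pre-computed enumeration, as the proposal suggests.
from itertools import product

features = ["A", "B", "C"]

def valid(cfg):
    # Example constraints: A is mandatory; B excludes C.
    return cfg["A"] and not (cfg["B"] and cfg["C"])

configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))
           if valid(dict(zip(features, bits)))]

print("count:", len(configs))
core = [f for f in features if all(c[f] for c in configs)]      # in every config
dead = [f for f in features if not any(c[f] for c in configs)]  # in none
print("core:", core, "dead:", dead)
```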
Thomas Haley is seeking a computer science-related position. He is currently pursuing a Bachelor of Science in Computer Science from Wentworth Institute of Technology, with an expected graduation date of August 2018. His relevant experience includes internships at Bose Corporation as an Automation Software Test Engineer and Software Test Engineer where he wrote Python scripts for testing and collaborated with teams. He also has retail experience packing and shipping online orders.
Software Analytics: Data Analytics for Software Engineering (Tao Xie)
This document summarizes a presentation on software analytics and its achievements and opportunities. It begins by noting how software itself, and how it is built and operated, are changing, with data becoming more pervasive and development more distributed. It then defines software analytics as enabling analysis of software data to obtain insights and make informed decisions. It outlines research topics covering different areas of the software domain throughout the development cycle. It describes target audiences of software practitioners and outputs of insightful and actionable information. Selected projects demonstrating software analytics are then summarized, including StackMine for performance debugging at scale, XIAO for scalable code clone analysis, and others.
The document describes the main state organs responsible for international relations. It mentions that the head of state is the supreme organ and that the ministries or secretariats of foreign relations manage relations with other countries. It also describes the permanent and temporary diplomatic agents who represent the state abroad and enjoy diplomatic immunity. In summary, the key organs of a state's international relations are the head of state, the ministry of foreign relations, and its diplomatic agents.
Living Well with Cancer Presentation (Webinar) (KellyGCDET)
This document discusses the importance of nutrition for cancer patients. It notes that malnutrition is common in 50% of cancer patients and is associated with weight loss, fatigue, weakness and impaired treatment tolerance. Early nutrition intervention can help preserve muscle mass and strength, improving quality of life and ability to complete cancer treatment. Screening tools like the Malnutrition Screening Tool and Patient Generated Subjective Global Assessment are recommended to assess nutritional status and guide appropriate nutrition support and interventions.
This chapter discusses circular motion, gravitation, and other related topics. It explains that an object in uniform circular motion has centripetal acceleration towards the center of the circle. For an object to undergo uniform circular motion, there must be a net centripetal force acting on it. Newton's law of universal gravitation describes the gravitational force between two objects. Satellites are able to stay in orbit around Earth due to their high tangential speed, which allows them to continually fall towards Earth but remain in orbit.
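As a quick worked example of the satellite claim (standard textbook constants, not values taken from the chapter itself):

```python
# Circular orbital speed v = sqrt(G*M/r) for a satellite in low Earth
# orbit. Standard physical constants; not taken from the chapter itself.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of Earth, kg
r = 6.371e6 + 400e3    # Earth radius + 400 km altitude, m

v = math.sqrt(G * M / r)
print(f"orbital speed ~ {v / 1000:.1f} km/s")  # roughly 7.7 km/s
```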
NFMNT Chapter 6 Fundamentals of Medical Nutrition Therapy for the CDM (KellyGCDET)
This document discusses medical nutrition therapy for various chronic diseases. It begins by defining medical nutrition therapy and outlining its two parts - nutritional assessment and treatment/intervention. It then covers specific MNT for conditions like obesity, cardiovascular disease, diabetes, cancer, and HIV/AIDS. Key aspects of MNT are identified for each condition, such as focusing on portion control and exercise for obesity or reducing sodium and increasing potassium for hypertension. The goal of MNT is to help manage diseases through therapeutic diets, counseling, and nutrition support.
The critical path method (CPM) is an algorithm developed in the late 1950s to schedule project activities. It involves identifying all the paths in a project, determining the earliest and latest start and finish dates for each activity, and identifying the critical activities that cannot be delayed without delaying project completion. The critical path is the longest path of critical activities that determines the minimum time required to complete the project. NASA used CPM to help schedule the tasks leading up to the first moon landing in 1969.
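A minimal sketch of CPM on an invented four-activity project: a forward pass computes earliest starts, a backward pass computes latest starts, and the zero-slack activities form the critical path.

```python
# Minimal critical path method on an invented four-activity project:
# forward pass for earliest starts, backward pass for latest starts;
# activities with zero slack form the critical path.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]          # topological order

earliest = {}
for a in order:                       # forward pass
    earliest[a] = max((earliest[p] + durations[p] for p in preds[a]), default=0)
finish = max(earliest[a] + durations[a] for a in order)

latest = {}
for a in reversed(order):             # backward pass
    succs = [s for s in order if a in preds[s]]
    latest[a] = min((latest[s] for s in succs), default=finish) - durations[a]

critical = [a for a in order if earliest[a] == latest[a]]
print("project length:", finish, "critical path:", critical)
# -> project length: 9 critical path: ['A', 'C', 'D']
```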
Dave Thomas is a software architect and engineer located in San Francisco, CA. He has 17 years of experience building startups and enterprises as a part-time CTO and technical advisor. He is a strong polyglot developer proficient in Java, Scala, Clojure, and many other languages. He has deep experience with DevOps practices and tools like Docker, AWS, Ansible, and Continuous Delivery. His background includes roles as CTO for Delicious and Verifi, and he currently works on his open-source project PeopleMerge.
Jeffrey Olson has 12 years of experience designing and developing enterprise software. He has expertise in Java, Groovy, Perl, JavaScript, C#, J2EE, Spring, Hibernate, Grails, and other frameworks and tools. At his current role, he leads a team developing a tax research application and has introduced practices like unit testing, continuous integration, and agile methodologies. He aims to use his strong technical and leadership skills to contribute to the success of an organization.
Christine Straub is an experienced machine learning engineer and data scientist with skills in Python, natural language processing, deep learning, computer vision, cloud computing, and data analysis. She has work experience developing AI chatbots, building scalable data warehouses, creating machine learning pipelines, and analyzing bio-metrics and geo-image data. Her education includes a BS in Computer Science from UC Berkeley and machine learning courses from Stanford.
This document provides a summary of Andrew Barker's skills and experience as a software developer. It outlines his technical skills in languages like C, C++, Java, Perl, and databases like Oracle, MySQL, and SQL. It also lists his personal skills such as attention to detail, problem solving, and communication. His professional experience includes roles developing software for organizations like British Telecom, T-Mobile, and the BBC.
This document provides a summary of Andrew Barker's skills and experience as a software developer. It includes details of his technical skills in languages like C++, Java, Perl, and databases like Oracle, MySQL, and SQL. It also lists his personal skills such as attention to detail, problem solving, and communication. His experience includes over 25 years working as a consultant for companies like Atos, CSC, and British Telecom, where he developed and maintained various software solutions.
The document provides guidance on designing a complex web application by breaking it into multiple microservices or applications. It recommends asking questions about team size, traffic patterns, priorities for speed vs stability, existing APIs or libraries, and programming languages. Based on the answers, it suggests appropriate frameworks, languages, data storage, testing/deployment processes, and server/container management options. The overall goal is to modularize the application, leverage existing tools when possible, and not overengineer parts of the design.
Building and deploying LLM applications with Apache Airflow (Kaxil Naik)
Behind the growing interest in Generative AI and LLM-based enterprise applications lies an expanded set of requirements for data integration and ML orchestration. Enterprises want to use proprietary data to power LLM-based applications that create new business value, but they face challenges in moving beyond experimentation. The pipelines that power these models need to run reliably at scale, bringing together data from many sources and reacting continuously to changing conditions.
This talk focuses on the design patterns for using Apache Airflow to support LLM applications created using private enterprise data. We'll go through a real-world example of what this looks like, as well as a proposal to improve Airflow and to add additional Airflow Providers to make it easier to interact with LLMs such as OpenAI's GPT-4 and those on HuggingFace, while working with both structured and unstructured data.
In short, this shows how these Airflow patterns enable reliable, traceable, and scalable LLM applications within the enterprise.
https://airflowsummit.org/sessions/2023/keynote-llm/
Sudipta Mukherjee has over 18 years of experience as a software developer and leader with expertise in machine learning, compilers, and functional programming. He has authored six books on programming topics and regularly presents at international conferences. His skills include C#, F#, Python, machine learning, domain-specific languages, and data analytics.
Prakash Mishra is seeking a position that allows professional and personal growth. He has over 2 years of experience as a Design Engineer and Member of Technical Staff doing embedded software development. His skills include C/C++, Python, shell scripting, Linux, Jenkins, Coverity, Git, Jira, and more. He has experience developing algorithms, automating builds and reports, testing, code reviews, and fixing bugs in image processing, automotive, and other domains. He holds a B.Tech in Computer Science and a PG Diploma in System Software Development.
IRJET- Voice to Code Editor using Speech Recognition (IRJET Journal)
This document presents a summary of a research paper on developing a voice-controlled code editor using speech recognition. A team of students and a professor from S.B Jain Institute of Technology, Management and Research created a Java program editor that allows users to write code using voice commands. The editor takes advantage of the natural human ability to speak language and allows coding more accurately and intuitively compared to manual typing. It analyzes the user's speech using acoustic and language modeling with Hidden Markov Models to accurately recognize commands. The proposed voice-controlled code editor is designed to reduce typing errors, improve coding speed, and enable people with disabilities to operate a computer. It will support basic editing tasks and allow switching between voice and manual input.
S. Ramkumar is a software developer with over 5 years of experience in Python, Perl, and Unix shell scripting. He has worked on projects for clients like Verizon and Morgan Stanley, developing scripts for automation, data processing, reporting and more. His skills include writing scripts for tasks like system monitoring, file transfer, database integration, and report generation. He is looking for a challenging position that utilizes his programming and problem-solving abilities.
This document is a resume for James E. Owen, a software developer and technical leader with over 25 years of experience. It summarizes his skills and accomplishments in areas like software development, leadership, problem solving, communication and teamwork. Notable projects include developing energy management systems, electronic whiteboard applications, and document management software. He is proficient in languages like Java, C#, Python and C/C++.
BigScience is a one-year research workshop involving over 800 researchers from 60 countries to build and study very large multilingual language models and datasets. It was granted 5 million GPU hours on the Jean Zay supercomputer in France. The workshop aims to advance AI/NLP research by creating shared models and data as well as tools for researchers. Several working groups are studying issues like bias, scaling, and engineering challenges of training such large models. The first model, T0, showed strong zero-shot performance. Upcoming work includes further model training and papers.
Tony Reid has over 30 years of experience as a software engineer and analyst with expertise in C#, ASP, Visual Basic, databases like Oracle and SQL Server, and methodologies including Agile and Waterfall. He has strong communication skills and experience managing both in-house and offshore development teams. Currently he is a Senior Systems Analyst at Eli Lilly where he has led projects involving data migration, application development, and technical support.
Kunal Bhatia has over 15 years of experience as a full stack software engineer specializing in Java/JEE development. He has worked on diverse projects including web applications, microservices, mobile apps, and voice/IVR systems. Currently he works as a microservices developer at Centene Corporation where he develops APIs using Java and Golang and implements CI/CD pipelines.
Fred McLain has over 15 years of experience as a software engineer and technical lead. He currently works at General Dynamics developing software for NASA's satellite communications systems. Previously he has worked on aircraft structural analysis tools at Boeing and developed open source accessibility tools for blind developers. He has extensive experience with Java, REST, distributed systems, and Agile development practices.
Jagrat Mankad has over 15 years of experience as a software developer and principal. He has expertise in C#, C/C++, Java, Python and SQL programming languages. Some of the applications he has worked on include an idea tracking tool called IdeaWorks, a test automation framework in Python, and several tools to aid in testing central systems. He is skilled in all phases of the software development life cycle from requirements gathering to delivery. He has a bachelor's degree in computer engineering and holds a US patent.
Laran Evans is a software developer and technical leader seeking new opportunities. He has over 15 years of experience leading development teams and managing complex software projects. His background includes roles managing custom application development, overseeing a Kuali financial system implementation, and leading the development of several modules for Cornell University's financial applications.
Jonathan D. Lettvin
27 Valley Hill Drive Worcester, Massachusetts 01602-2023
Phone: 617-828-5491 E-Mail: jlettvin@gmail.com
http://www.linkedin.com/in/jonathanlettvin
Objective
I am at my best solving technical and scientific problems requiring new insights and algorithms. My employers and
clients have all profited from my inventions in their business space. I relentlessly research process and algorithmic value-add propositions to achieve your goals. I hold five software patents, two in antivirus, and three in image processing. I
enjoy programming in Python, PHP, C++, javascript, and ASM where I use mathematical libraries like scipy, OpenCV,
and OpenCL. I learn languages and write new ones when needed by my employer. I have a particular knack for value
discovery in big data. Let me invent for you.
I have worked in data visualization, biometrics, language (both human and computer), operating systems, data storage,
backbone network, physics, market data, software security, medical devices, robotics, scientific satellites, consumer
applications, and more. I have produced solutions for MIT, Lotus/IBM, NASA, Navy, Bell Labs, Carbonite and Fidelity.
I have written many original commercial products including real-time operating systems, boot sectors, file systems,
device drivers, EEPROM code, glue layers, TCP/IP primitive operations, IDL compilers, image processing language
interpreters, programmers editors, web scrapers, automated code generators, language-to-language filters, software
license ticket generators, XML lexers, canonicalizers, automated traceability matrix generators, heterogeneous SQL table
column comparators, high speed market data ingesters, antivirus applications, and much more. I wrote a C++ program to distribute charges within a sphere based on mutual repulsion to illustrate Gauss's law; it was referenced in a recent book (the code is in my github repository). Some of these products have been delivered to over a million users. Many are still in use.
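A compact sketch of that charge-distribution idea, rendered here in Python rather than the original C++ (parameters invented): points confined to a ball and driven by inverse-square repulsion migrate to the surface, as Gauss's law predicts.

```python
# Charges confined to a ball, moved by mutual Coulomb-like repulsion;
# they migrate to the surface, illustrating Gauss's law. A Python sketch
# of the idea, not the original C++ program; parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 64
pts = rng.normal(size=(n, 3)) * 0.1                           # start near center

for _ in range(2000):
    diff = pts[:, None, :] - pts[None, :, :]                  # pairwise vectors
    dist = np.linalg.norm(diff, axis=-1) + np.eye(n)          # avoid self-division
    force = (diff / dist[..., None] ** 3).sum(axis=1)         # inverse-square push
    pts += 0.001 * force
    radii = np.linalg.norm(pts, axis=1)
    outside = radii > 1.0
    pts[outside] /= radii[outside, None]                      # confine to unit ball

print("mean radius:", np.linalg.norm(pts, axis=1).mean())     # approaches 1.0
```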
I spend my free time generating software utilities to support my personal investigation into neuron structure, function, and emergent behaviors. Some of these show up in my github repository (see links below). Others show up as web
pages (see http://rote.training). This keeps my language skills current including C++, ASM, Python, PHP, javascript...
My motto is "small fast correct complete tested". I like to make code solid, clear, documented, optimized, tested, and
maintainable. I have achieved zero-bugs, complete code, and 100% tested when my employers chose to allow a modest
investment of time. I focus down and "get the job done", but I also "spitball" and contribute ideas and implementation in a group. I find pair programming in Agile "Extreme Programming" highly productive. My recent insights into
development processes have led to a publication combining Agile and Waterfall. I attack seemingly insoluble problems. I
learn my way into disciplines, technologies, and languages that enable me to generate new solutions for you.
Skills
A sample of my alphabet soup: linux, unix, Mac OS X, Windows, DOS, asm, C, C++, Python, PHP, MySQL, javascript,
node.js, socket.io, bash, gnuplot, graphviz, mediawiki, HTML/HTML5, XML, LaTeX, JSON, STL, IDL, scipy, itertools,
unittest, pytest, coverage, OpenGL, OpenCV, OpenCL, matplotlib, git, svn, virtualbox, gcov, valgrind, doxygen, pep8,
pylint, pychecker. I have deep experience in many additional languages, libraries, and platforms.
Experience
Deltek (through TPA) (Woburn, MA) Jun/2016-Aug/2016 consulting software engineer
Short term contract: Wrote VB.NET server code and javascript front-end code for major deadline.
Altman & Vilandrie (Boston, MA) May/2016-Jun/2016 consulting software engineer
Short term contract: Configured Vagrant to launch market data ingest server.
IVES, Inc (Sutton, MA) Oct/2014-Jun/2015 senior software engineer
Compared SQL columns with mismatched schemas, finding commonalities/differences. A large customer was retained.
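A minimal sketch of that kind of cross-schema column comparison (table and column names invented; the actual tool's logic is not public):

```python
# Comparing a column's values across two tables with mismatched schemas,
# reporting commonalities and differences. Table and column names are
# hypothetical; the real tool's logic is not public.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE legacy (cust_name TEXT);
    CREATE TABLE modern (customer TEXT);
    INSERT INTO legacy VALUES ('Acme'), ('Globex'), ('Initech');
    INSERT INTO modern VALUES ('Acme'), ('Initech'), ('Umbrella');
""")

a = {row[0] for row in con.execute("SELECT cust_name FROM legacy")}
b = {row[0] for row in con.execute("SELECT customer FROM modern")}

print("common:", sorted(a & b))        # values present in both columns
print("only legacy:", sorted(a - b))   # differences per side
print("only modern:", sorted(b - a))
```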
Spritz Inc. (Wakefield, MA) Apr/2014-Oct/2014 field scientist
Normalized/rendered non-European glyphs in a resized reticle. I explored/documented requirements for reticle fit and
justification. Spritz now operates in more languages (see http://lettvin.com/Rendering.html).
Editshare (Allston, MA) Aug/2013-Mar/2014 senior software engineer
Wrote LTFS (Linear Tape File System) device drivers in Python. I also developed tools for analyzing logs and SCSI stream state transitions to aid maintenance and improvement.
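A sketch of that log-analysis idea: replay events against an allowed-transition table and flag illegal transitions. The states and log format here are invented for illustration, not the actual LTFS/SCSI vocabulary.

```python
# A log-driven state-transition checker of the kind described: replay a
# stream of SCSI-like events against an allowed-transition table and flag
# illegal transitions. States and log format are invented for illustration.
ALLOWED = {
    ("idle", "load"): "loaded",
    ("loaded", "write"): "writing",
    ("writing", "write"): "writing",
    ("writing", "flush"): "loaded",
    ("loaded", "unload"): "idle",
}

def replay(events, state="idle"):
    for i, ev in enumerate(events):
        nxt = ALLOWED.get((state, ev))
        if nxt is None:
            print(f"line {i}: illegal {ev!r} in state {state!r}")
        else:
            state = nxt
    return state

replay(["load", "write", "write", "unload", "flush"])  # 'unload' is illegal here
```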
Carbonite (Boston, MA) Apr/2012-Apr/2013 principal software engineer
Carbonite needed to choose between RAID6 storage and distributed storage products of Amplidata and Cleversafe based
on storage efficiency (bytes/dollar). No tools existed to perform the calculations, or to apply them over a variety of file
sizes. My tasks were to negotiate with the vendors, acquire the mathematical formulae, write the analysis code, and
apply the resulting tool over a sufficiently rich variety to enable Carbonite executives to make a choice of direction. I
performed all the tasks, wrote Python code on linux, generated a report, and closed the project in two weeks using TDD
methods, where the resulting code could be updated to include new vendors and new storage paradigms with ease.
Carbonite saved millions of dollars by choosing their new course based on the output of this program.
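A sketch of the kind of calculation involved (prices and parameters are invented; the real vendor formulas were negotiated privately): usable capacity per dollar for RAID6 versus k+m erasure coding.

```python
# The kind of calculation described above: usable bytes per dollar for
# RAID6 versus (k+m) erasure coding. Prices and parameters are invented;
# the actual vendor formulas were proprietary.
def raid6_efficiency(disks):
    return (disks - 2) / disks            # two parity disks per group

def erasure_efficiency(k, m):
    return k / (k + m)                    # k data shards, m parity shards

COST_PER_RAW_TB = 25.0                    # invented dollars per raw TB

for label, eff in [("RAID6 (10 disks)", raid6_efficiency(10)),
                   ("EC 10+6", erasure_efficiency(10, 6)),
                   ("EC 16+4", erasure_efficiency(16, 4))]:
    usable_tb_per_dollar = eff / COST_PER_RAW_TB
    print(f"{label}: efficiency {eff:.2f}, "
          f"{usable_tb_per_dollar * 1000:.1f} GB usable per dollar")
```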
Carbonite needed "monkey testing" (cf. Netflix's "Chaos Monkey") of the new storage platform being developed. Without a platform ready to test, the needed tool had to be developed proactively with reference to the design. I was assigned the task and produced an automated Python "traceability matrix" generator on linux to track the response of the nascent platform to problems and to identify which versions implemented which features of the design. The matrix auto-populated successfully, and the team was able to track when bugs were introduced and repair them quickly.
Carbonite had a special coding project that required the expertise of a gifted sysadmin, front-end developer, and backend
developer, and the project was expected to take 3 weeks. I was tasked with the backend role. I threw together a shared-screen session with linux tools and a skype voice session with the other two contributors, and we shared the development
for about 5 hours, passing the keyboard/mouse/screen responsibilities back-and-forth in remote "Extreme Programming"
style. The project was finished in less than a day with high quality, long before it was due.
Kyruus (Boston, MA) Jan/2012-Mar/2012 software consultant
Kyruus scrapes medical payments/insurance/legal results from public websites, and found that hospital names (amongst
other fields) needed to be converted from a variety of formats (with and without errors) into "canonical" or legal names;
where their existing methods were 18% effective and the remainder were shipped to India for hand canonicalization. I
was tasked with improving the conversion rate. I wrote a new Python canonicalizer which used a multi-algorithm voting
approach to enable voting between Levenshtein, Soundex, NYSIIS, metaphone, FatFinger, acronym, and other analysis
algorithms (new algorithms could be added with ease). After two weeks of development, I delivered the new
canonicalizer at a company meeting, demonstrating that it was already achieving 80% conversion rate.
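The voting idea in miniature, with two stdlib-only matchers standing in for the Levenshtein/Soundex/NYSIIS/metaphone family (the canonical names are invented):

```python
# The multi-algorithm voting idea in miniature: each matcher votes for the
# canonical name it considers closest, and the plurality wins. The real
# system used Levenshtein, Soundex, NYSIIS, metaphone, and more; these two
# stdlib-only matchers stand in for them. Canonical names are invented.
from collections import Counter
from difflib import SequenceMatcher

CANONICAL = ["Massachusetts General Hospital", "Boston Medical Center"]

def ratio_vote(name):
    # Character-level similarity vote.
    return max(CANONICAL,
               key=lambda c: SequenceMatcher(None, name.lower(), c.lower()).ratio())

def token_vote(name):
    # Shared-word vote.
    tokens = set(name.lower().split())
    return max(CANONICAL, key=lambda c: len(tokens & set(c.lower().split())))

def canonicalize(name, voters=(ratio_vote, token_vote)):
    votes = Counter(v(name) for v in voters)
    return votes.most_common(1)[0][0]

print(canonicalize("Mass. General Hosp."))  # -> Massachusetts General Hospital
```

New matchers drop in as extra entries in `voters`, which mirrors the "new algorithms could be added with ease" property described above.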
Kyruus was using SQL to store data scraped from the web, but wanted to use Hadoop. I was tasked with providing a duck-to-strict Python SQL translator to ingest the data that had already been stored to make it available for re-storage in Hadoop. The Python I wrote enabled an SQL table schema to be modified for re-use as an ingester for data from the existing database. In other words, the Python looked exactly like the table description. Kyruus was able to convert from SQL to Hadoop quickly with this application.
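A sketch of that "Python that looks like the table description" idea, with an invented schema: the class body mirrors the SQL columns and drives both column ordering and type coercion.

```python
# The "Python looks exactly like the table description" idea: a class whose
# body mirrors the SQL schema, driving both ingest order and per-column
# type coercion. Schema, names, and data are invented for illustration.
class Column:
    def __init__(self, sqltype, cast):
        self.sqltype, self.cast = sqltype, cast

class HospitalPayments:   # reads like: CREATE TABLE hospital_payments (...)
    provider = Column("VARCHAR(64)", str)
    amount   = Column("DECIMAL",     float)
    year     = Column("INTEGER",     int)

def ingest(table, row):
    # Class bodies preserve definition order, so columns line up with rows.
    cols = {k: v for k, v in vars(table).items() if isinstance(v, Column)}
    return {name: col.cast(value)
            for (name, col), value in zip(cols.items(), row)}

print(ingest(HospitalPayments, ["Mass General", "1234.50", "2011"]))
# -> {'provider': 'Mass General', 'amount': 1234.5, 'year': 2011}
```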
AER (Lexington, MA) Mar/2011-Oct/2011 software consultant
Wrote regression tests for test suite and converted code to government specification for a new environmental satellite.
Zipix (Burlington, MA) Aug/2008-Mar/2009 CTO and architect
Zipix, a startup for which I was a founder, needed new algorithms for improving photographic images. As chief scientist,
it was my job to produce these algorithms. These algorithms were developed, tested, and published as patents. The
company raised $1.2M and launched.
Investment Technology Group (Boston, MA) Feb/2008-Aug/2008 consulting mentor
ITG had an algorithm for ingesting the BATS/PITCH market feed using pattern matching; it took 36 hours to read 24 hours' worth of data and failed to handle many data-corruption problems. I was tasked with improving this. I wrote an LR(1) lexer using computed GOTO in about two weeks, which matched the formal specifications and successfully
identified and ingested all valid records from the data source. When finished, the new ingester operated on the same 2TB file in 90 minutes, where the limiting speed was reading the data off the disk.
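C's computed GOTO has no direct Python equivalent, but the table-driven structure translates to a dispatch dictionary. A sketch with an invented message layout (not the actual BATS/PITCH wire format):

```python
# The structure of a table-driven feed parser: a dispatch table maps the
# one-byte message type to a handler, so each record is routed without
# per-record pattern matching. The message layout here is invented, not
# the actual BATS/PITCH wire format; C's computed GOTO becomes a dict lookup.
def on_add(body):    return ("add", body)
def on_trade(body):  return ("trade", body)

DISPATCH = {"A": on_add, "P": on_trade}

def ingest(lines):
    for line in lines:
        handler = DISPATCH.get(line[:1])
        if handler is None:
            continue                     # skip corrupt/unknown records
        yield handler(line[1:].rstrip("\n"))

print(list(ingest(["AORD1\n", "PTRD9\n", "??bad\n"])))
# -> [('add', 'ORD1'), ('trade', 'TRD9')]  (the corrupt record is skipped)
```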
Soapstone (Chelmsford, MA) 2007-2008 software consultant
Soapstone had many developers writing independent apps, all of which were to communicate through network links
using IDL (Interface Definition Language). I was tasked with writing the IDL compiler to capture all the data types,
structures, and serialization/deserialization of the data passed between the apps. With my "Extreme Programming" pair
programmer, we produced and maintained a stable, reliable, efficient IDL compiler using boost::spirit (a C++ parsing
library) which was used until the company folded.
Bluefin Robotics (Cambridge, MA) 2006-2006 senior software engineer
Bluefin had a contract with the Navy to produce a new BPAUV (Battlespace Preparation Autonomous Underwater
Vehicle). I was tasked with producing all the diagrams and the reference manual for this vehicle. These were all produced
and delivered on time.
AES (Boston, MA) 2005-2005 software consultant
Wrote a BREP-to-voxel converter for high-energy-particle simulations of NASA's Manned Mission to Mars vehicle.
Lotus/IBM (Cambridge, MA) 1987-2002 principal software engineer
Lotus needed to finish and deliver an update to its METRO/EXPRESS TSR product, which had hundreds of documented
bugs. The 40-member team was disbanded but, as a contractor, I was not released and was left working on the project
alone. I approached a senior vice president and suggested that I could finish the delivery on time by myself.
He said "no", but gave me a product manager, a project manager, a documenter, and three testers and told
me to "go for it". After about 4 months, I delivered the finished project with zero bugs, ready for delivery two weeks early.
The product sold millions of copies without marketing or advertising; it was merely serviced to customers who
requested it. Although heavily used, no bugs were ever reported again.
Lotus wanted to port its 1-2-3 spreadsheet to UNIX. I was given the task. I had to rewrite the scheduling algorithm
from scratch, since the scheduling rules for UNIX and Windows differed considerably. The new spreadsheet
outperformed the native Windows 1-2-3. I passed the project on when I had the opportunity to work on antivirus.
The "brain" virus showed up on a diskette in Lotus. I went, again, to a senior vice president and said Lotus needed an
antivirus department. Lotus assigned the job to an existing manager, and nothing was done for a couple of months. I
went back to ask what was going to be done, and they told me they were waiting for me to offer to do it. I accepted the
task. Over the next 10 years, I developed Lotus antivirus and performed all antivirus related activities for the entire
company. During my tenure, Lotus never shipped a virus to a customer (unlike all other major software companies).
Lotus needed a new "glue layer" to enable its new Lotus 1-2-3 Release 3 Windows product to operate on DOS. I accepted
the task. I wrote the glue layer in about 1 week, and 1-2-3 was subsequently reintroduced into the DOS marketplace.
Lotus products consumed over 100 diskettes when delivered. I was given the task of reducing this count. I proposed and
implemented a new disk format which increased capacity from 1.44 MBytes to 1.94 MBytes per diskette. I presented the
result, showed that it survived implementation in our manufacturing facility, and demonstrated a 40% diskette read-speed
boost. A vice president chose to kill the project. A month later, Microsoft was delivering its products using this format.
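The arithmetic behind such capacity gains, for illustration only (the actual geometry of the Lotus format is not documented here): a 3.5" HD diskette's capacity is the product of cylinders, heads, sectors per track, and bytes per sector, so extra sectors per track raise capacity linearly.

    # Illustrative arithmetic only; the real Lotus geometry is unknown here.
    def capacity(cylinders=80, heads=2, sectors_per_track=18, sector_bytes=512):
        return cylinders * heads * sectors_per_track * sector_bytes

    # Diskette "MB" conventionally means 1024 * 1000 bytes.
    for spt in (18, 21, 24):
        print(spt, "sectors/track ->", capacity(sectors_per_track=spt) / 1024000, "MB")
    # 18 -> 1.44 MB (the standard format). Each extra sector per track adds
    # 80 * 2 * 512 = 81,920 bytes, so ~24+ sectors/track approaches 2 MB.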
Lotus also wanted to consider compression to reduce the diskette count. I was offered the chance to research this. I
reviewed the existing methods and discovered a new one: the 7-field Intel instruction encoding could be fragmented into
separate dictionaries, one per field. I developed the method and proved that it offered better compression with Huffman
encoding than the best competitive technique did with arithmetic encoding. Although it had promise, it was simply a
research project and never got deployed.
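A sketch of why per-field dictionaries help, under a synthetic instruction model (the field distributions below are assumptions, not real x86 statistics): Shannon entropy lower-bounds Huffman's bits per symbol, and coding each field's stream with its own dictionary beats one dictionary over the mixed stream whenever the fields' distributions differ.

    # Synthetic demonstration: bytes from different instruction fields
    # have different distributions, so per-field entropy (and hence the
    # per-field Huffman-coded size) is lower than mixed-stream entropy.
    import math, random
    from collections import Counter

    def entropy_bits(stream):
        """Shannon entropy per symbol: lower bound on Huffman bits/symbol."""
        counts, n = Counter(stream), len(stream)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    random.seed(0)
    opcodes    = [random.choice([0x89, 0x8B, 0xE8, 0x74]) for _ in range(10000)]
    modrm      = [random.getrandbits(6) for _ in range(10000)]   # narrow field
    immediates = [random.getrandbits(8) for _ in range(10000)]   # wide field

    mixed = opcodes + modrm + immediates
    split = (entropy_bits(opcodes) + entropy_bits(modrm)
             + entropy_bits(immediates)) / 3
    print(f"mixed stream : {entropy_bits(mixed):.2f} bits/byte")
    print(f"split streams: {split:.2f} bits/byte")   # strictly lower here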
Lotus products were coming back from customers shrink-wrapped, with viruses on them. I was tasked with figuring out
why. I used a signal analyzer to discover that the signal properties were inconsistent with our manufacturing machines.
We finally tracked the problem back to resellers who were re-shrink-wrapping returned disks and sending them back out.
Lotus was able both to appear to be, and actually be, a good actor, proactively helping customers.
Because of the above problem, I developed an entirely new diskette format which set aside the sectors attacked by known
viruses (accounting for 99+% of all attacks), so those viruses could not damage Lotus data. I also wrote a mini operating
system and antivirus, occupying undocumented sectors of the diskette, that would undo all virus damage on customer
machines. Subsequently, when customers accidentally infected a Lotus diskette and then rebooted their machine with
one of these diskettes, my antivirus would kick in and offer to fix their machine. Lotus's customers LOVED this.
IBM needed a faster, cleaner XML 1.0 / Unicode 3.0 lexer ingest algorithm to handle all 13 contemporaneous stream types.
I was tasked with researching techniques. I developed a three-tier table-driven lexer and implemented its LR1 ingest
routine in 21 Intel instructions. The ingest speed exceeded the disk file read/write speed.
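For illustration, the first stage any such lexer needs is stream-type detection: XML 1.0 Appendix F specifies how the first bytes of a stream identify its encoding family before lexing begins. The Python sketch below covers only some of the detectable families; order matters, since longer signatures must be checked before their prefixes.

    # XML 1.0 Appendix F encoding auto-detection from the first bytes
    # (a BOM, or the bytes of "<?xm" in each encoding family).
    SIGNATURES = [
        (b"\x00\x00\xfe\xff", "UCS-4, big-endian"),
        (b"\xff\xfe\x00\x00", "UCS-4, little-endian"),
        (b"\xfe\xff",         "UTF-16, big-endian (BOM)"),
        (b"\xff\xfe",         "UTF-16, little-endian (BOM)"),
        (b"\xef\xbb\xbf",     "UTF-8 with BOM"),
        (b"\x00\x3c\x00\x3f", "UTF-16BE, no BOM"),   # "<?" in UTF-16BE
        (b"\x3c\x00\x3f\x00", "UTF-16LE, no BOM"),   # "<?" in UTF-16LE
        (b"\x3c\x3f\x78\x6d", "UTF-8/ASCII family"), # "<?xm"
        (b"\x4c\x6f\xa7\x94", "EBCDIC family"),      # "<?xm" in EBCDIC
    ]

    def detect(prefix):
        """Return the encoding family implied by the stream's first bytes."""
        for signature, family in SIGNATURES:
            if prefix.startswith(signature):
                return family
        return "unknown"

    print(detect(b"<?xml version='1.0'?>"))   # -> UTF-8/ASCII family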
Lotus needed a representative "face of antivirus". I wrote articles for the Virus Bulletin, was interviewed on NPR, and sent
white papers to Lotus executives so they could respond sensibly when confronted with questions at conferences.
Ecco Industries, Inc. (Danvers, MA) 1985-1987 Director of Software Engineering
Ecco developed an IBM PC (8088) card for multitasking speaker-verification biometrics. They hired me to patch the
existing codebase, which had been abandoned by a previous developer. I patched the code, and Ecco was able to satisfy
its customers and investors that progress was being made.
Ecco needed the product to be multitasking and tasked me with identifying how to accomplish this. I reviewed
existing operating systems and found that none could perform the desired functionality. I proposed, and then developed
over a period of several months, a real-time operating system which could. Several extreme reprogrammings of the
IBM PC motherboard chipset were necessary, covering bus usage, the DMA chip, the interrupt chip, and the rarely used
absolute memory addressing mode, to reduce process clock ticks. The company was again able to satisfy its customers
and investors. I still have in my possession a running machine with this multitasking, variable-time-slicing, cooperative
and preemptive context-switching OS.
New EPROM code was needed to run the NEC DSP and Motorola 68000 chips on the Ecco card. I was tasked with
writing this embedded code, and delivered it without subsequent problems.
MIT Plasma Fusion Center (Cambridge, MA) 1978-1981 software engineer
PFC needed to accelerate tokamak engineering software development. I wrote a LISP program that read FORTRAN-style
equations, organized them, read the comments to extract variable documentation, prompted for missing documentation,
sorted the equations to minimize I/O requirements, wrote the I/O statements, compiled the program, ran it
with data values from the comments, and then wrote a TeX paper describing the problem and solution as given in the
comments. This program, the FORSE, decreased engineering development time from 40 weeks to 1½ weeks.
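One of those steps, sorting the equations so each variable is computed before it is used, amounts to a topological sort over variable dependencies. A minimal Python sketch under stated assumptions (the naive identifier scan and sample equations are illustrative; FORSE itself was LISP and did far more):

    # Sort FORTRAN-style assignments so every variable is defined before
    # use. Requires Python 3.9+ for graphlib in the standard library.
    import re
    from graphlib import TopologicalSorter

    equations = {
        "power":      "power = current * voltage",
        "current":    "current = voltage / resistance",
        "voltage":    "voltage = 120.0",
        "resistance": "resistance = 8.0",
    }

    def dependencies(rhs, known):
        """Variables referenced on the right-hand side that we define."""
        return {tok for tok in re.findall(r"[A-Za-z_]\w*", rhs) if tok in known}

    # Map each variable to the variables its equation depends on.
    graph = {var: dependencies(eq.split("=", 1)[1], equations)
             for var, eq in equations.items()}

    for var in TopologicalSorter(graph).static_order():
        print(equations[var])
    # -> resistance and voltage first, then current, then power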
The FORSE ran on costly Multics Emacs (LISP). I wrote a complete, compatible LISP Emacs on the MIT ITS machine
MC; this project took 7 days total. The FORSE ran correctly on this new Emacs.
A small sampling of personal pages
Independent Projects
My principal personal project is modeling neuron operations and neuron system operations. This project forces me to
learn new languages and new technologies in the pursuit of adequate technology for illustrating my ideas. Some of these
models are shown in the samples (above).
Agile, Extreme, Remote, Waterfall, and all that...
I have read the Agile Manifesto (http://agilemanifesto.org/ and https://www.agilealliance.org/agile101/the-agile-
manifesto/) and think it a very well-thought-out piece, useful for many projects. I consider "waterfall" to produce
better software under certain circumstances (e.g., complete, provable code by design, to acquire a Common Criteria
EAL7), though it requires foresight, budget, and long-term commitment. When code simply has to provably work
(correctness more important than time-to-market), waterfall produces better code.
Many organizations call themselves "agile" while burdening themselves with "agile tools" meant to aid managers, which
end up soaking up far more than 50% of developer time when a whiteboard with sticky notes would be better. IMHO a
more valuable approach is to assign one agile pair the role of converting whiteboard changes made during scrum into
manager-tool content after the fact. Assist the developers to do what developers do; that's what the Agile Manifesto
proposes.
In addition, I have solid experience in remote pair programming where I demonstrated that development quality and
efficiency can be increased by an order of magnitude, leading to higher client satisfaction and reduced maintenance.
I love using remote screen/keyboard/mouse sharing tools like tmux, join.me, and Google Hangouts, along with
audio/video conferencing, to reduce the wasted time and stress of commuting and to increase productivity through ready
access to personal books and devices.
Education
MIT BS Physics. 1971-1976
Developed novel parabolic mirror for thesis.
University of Texas Health Science Center at Dallas. 1976-1977
Unfinished degree in Medicine/Biophysics
http://jonathan.lettvin.com Web presence
http://www.linkedin.com/in/jonathanlettvin Web presence
https://github.com/jlettvin GitHub repository
http://lettvin.com/Jonathan/wiki/index.php/Writing_Samples unorganized points of interest
https://github.com/jlettvin/code_quality Python illustration of quality techniques
https://github.com/jdl-mit-alum/code-quality C++ and Python: more code quality techniques
https://github.com/jlettvin/UnsignedLongLongLexer C++ high-speed, unit-tested atoi converter
https://github.com/jlettvin/RomanNumerals Python with exhaustive unit-testing
https://rawgit.com/jlettvin/JYL/master/seen.movement.html HTML5/JavaScript scientific paper annotation
http://rote.training Minimalistic web page dev with Markdown
http://rawgit.com/jlettvin/Tubulin/master/bipolar.unique.path.html Three.js 3D model of retinal bipolar tubulin
https://rawgit.com/jlettvin/haiku/master/index.html Love for Unicode