The document outlines data structures and algorithms, including complexity analysis, common data structures such as arrays, stacks, queues, and linked lists, and sorting algorithms such as merge sort and quick sort. It provides an overview of these topics along with examples of analyzing time complexity using Big-O notation.
Stacks follow the LIFO (last in, first out) principle. They are commonly implemented using arrays, where elements are pushed and popped from one end of the array to enforce the LIFO behavior. This restricts access to a single end, in contrast to the random access that regular arrays allow. Common stack operations include push to add an element, pop to remove the top element, peek to access the top element without removing it, and checks for empty or full stacks. Stacks have many applications, such as function-call management, undo/redo operations, and expression parsing.
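The array-backed design described above can be sketched as follows; the class and method names are illustrative, not taken from any specific document.

```java
// Minimal fixed-capacity, array-backed stack of ints (illustrative sketch).
class ArrayStack {
    private final int[] data;
    private int top = -1; // index of the current top element; -1 means empty

    ArrayStack(int capacity) {
        data = new int[capacity];
    }

    boolean isEmpty() { return top == -1; }
    boolean isFull()  { return top == data.length - 1; }

    // push: add an element at the top end of the array
    void push(int value) {
        if (isFull()) throw new IllegalStateException("stack is full");
        data[++top] = value;
    }

    // pop: remove and return the top element (LIFO order)
    int pop() {
        if (isEmpty()) throw new IllegalStateException("stack is empty");
        return data[top--];
    }

    // peek: read the top element without removing it
    int peek() {
        if (isEmpty()) throw new IllegalStateException("stack is empty");
        return data[top];
    }
}
```

All operations touch only the `top` end of the array, which is what enforces the LIFO discipline.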
The document summarizes the Deuce software transactional memory (STM) framework for Java. Deuce lets developers add concurrency to Java applications through atomic blocks without changes to the JVM and without new reserved keywords. It works by dynamically instrumenting bytecode to enable software transactions over shared fields. Benchmarks show it scales well on multi-core systems compared to other STM approaches like TL2 and LSA that require more intrusive changes.
This document discusses algorithm complexity and data structure efficiency. It covers topics such as time and memory complexity, asymptotic notation, fundamental data structures (arrays, lists, trees, and hash tables), and how to choose the proper data structure. Computational complexity is important for algorithm design and efficient programming. The document provides examples of analyzing complexity for different algorithms.
The document discusses Java methods. It defines a method as a collection of statements grouped to perform an operation. A method signature combines the method name and parameter list. Formal parameters are defined in the method header, while actual parameters are the values passed when invoking the method. The document provides an example method that returns the maximum of two integers.
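A minimal sketch of the kind of method described above, assuming the conventional name max (the document's exact code is not reproduced here):

```java
// Illustrative sketch: a method returning the maximum of two integers.
class MathUtil {
    // num1 and num2 are the formal parameters declared in the method header;
    // the values supplied at the call site are the actual parameters.
    static int max(int num1, int num2) {
        return (num1 > num2) ? num1 : num2;
    }
}
```

Invoking it as `MathUtil.max(3, 7)` passes the actual parameters 3 and 7 and returns 7.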
ByteCode 2012 Talk: Quantitative analysis of Java/.Net like programs to under... (garbervetsky)
There is an increasing interest in understanding and analyzing the use of resources in software and hardware systems. Certifying memory consumption is vital to ensure safety in embedded systems as well as proper administration of their power consumption; understanding the number of messages sent through a network is useful to detect performance bottlenecks or reduce communication costs; and so on. Assessing resource usage is indeed a cornerstone in a wide variety of software-intensive systems, ranging from embedded to cloud computing. It is well known that inferring, and even checking, quantitative bounds is difficult (actually undecidable). Memory consumption is a particularly challenging case of resource-usage analysis due to its non-accumulative nature: inferring memory consumption requires not only computing bounds for allocations but also taking into account the memory recovered by a garbage collector (GC). In this talk I will present some of the work our group has been performing in order to automatically analyze heap memory requirements. In particular, I will show some basic ideas which are core to our techniques and how they were applied to different problems, ranging from inferring sizes of memory regions in real-time Java to analyzing heap memory requirements in Java/.Net. Then, I will introduce our new compositional approach, which is used to analyze (infer/verify) Java and .Net programs. Finally, I will explain some limitations of our approach and discuss some key challenges and directions for future research.
The document presents a two-level approach for solving stochastic planning problems in operating rooms. At the first level, a deterministic model is used to allocate block times to specialties. At the second level, a stochastic model incorporates random durations to determine if solutions are feasible with high probability. Safety slacks are calculated for blocks likely to exceed durations and fed back into the deterministic model in an iterative process until a robust solution is found. Monte Carlo simulation and the Fenton-Wilkinson approximation are also discussed to model lognormal durations. The approach is applied preliminarily to an operating room case study.
The document discusses deep feedforward networks, also known as multilayer perceptrons. It begins with an introduction to feedforward networks, which apply vector-to-vector functions across multiple hidden layers without feedback connections between layers. Each hidden layer consists of units that resemble neurons. The document then covers gradient-based learning, different cost functions, types of output and hidden units like ReLU, and considerations for network architecture such as depth, width, and universal approximation properties.
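As a concrete instance of the hidden units mentioned above, a hidden layer typically computes an affine transformation followed by an elementwise nonlinearity, with the ReLU being the common default choice:

```latex
\mathbf{h} = g\!\left(\mathbf{W}^{\top}\mathbf{x} + \mathbf{b}\right),
\qquad
g(z) = \max\{0, z\}
```

The ReLU is piecewise linear, which keeps gradients large and constant wherever the unit is active.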
Kernel Entropy Component Analysis in Remote Sensing Data Clustering.pdf (grssieee)
This document presents Kernel Entropy Component Analysis (KECA) for nonlinear dimensionality reduction and spectral clustering in remote sensing data. KECA extends Entropy Component Analysis (ECA) to kernel spaces to capture nonlinear feature relations. It works by maximizing the entropy of data projections while preserving between-cluster divergence. The paper describes KECA methodology, including kernel entropy estimation, nonlinear transformation to feature space, and spectral clustering based on Cauchy-Schwarz divergence between cluster means. Experimental results on cloud screening from MERIS satellite images show KECA outperforms k-means clustering, KPCA dimensionality reduction followed by k-means, and kernel k-means.
Regularization is used in deep learning to reduce generalization error by modifying the learning algorithm. Common regularization techniques for deep neural networks include:
1) Parameter norm penalties like L2 and L1 regularization that penalize the weights of a network. This encourages simpler models that generalize better.
2) Early stopping which obtains the model parameters at the point of lowest validation error during training, rather than at the end of training.
3) Data augmentation which creates additional fake training data through techniques like translation to improve robustness.
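As a concrete instance of a parameter norm penalty, L2 regularization (weight decay) adds a squared-norm term to the training objective, so that minimizing the regularized objective trades data fit against weight magnitude:

```latex
\tilde{J}(\boldsymbol{\theta}; \mathbf{X}, \mathbf{y})
  = J(\boldsymbol{\theta}; \mathbf{X}, \mathbf{y})
  + \frac{\alpha}{2}\,\lVert \mathbf{w} \rVert_2^2
```

Here $\alpha \geq 0$ controls the strength of the penalty: larger values push the weights toward zero and yield simpler models. L1 regularization replaces the squared norm with $\alpha \lVert \mathbf{w} \rVert_1$, which additionally encourages sparse weights.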
Forecasting electricity consumption with function-valued processes (Cdiscount)
The document discusses predicting functional time series. It introduces functional data as slices of a continuous stochastic process over time. The goal is to predict future values of this process given past observations. An autoregressive Hilbertian process of order 1 is proposed, where the current value is a linear function of the past value plus noise. Under certain conditions, this defines a strictly stationary process whose best predictor is the past value multiplied by the linear operator. The document outlines clustering functional data using wavelets and applying these methods to predict electricity demand.
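The order-1 autoregressive Hilbertian model described above can be written as:

```latex
X_{n+1} = \rho(X_n) + \varepsilon_{n+1}
```

where $\rho$ is a bounded linear operator on the Hilbert space of functions and $(\varepsilon_n)$ is a Hilbert-valued white noise. Under the stated conditions this defines a strictly stationary process, and the best predictor of $X_{n+1}$ given the past is $\rho(X_n)$.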
Cosmological Perturbations and Numerical Simulations (Ian Huston)
Talk given at Queen Mary, University of London in March 2010.
Cosmological perturbation theory is well established as a tool for probing the inhomogeneities of the early universe. In this talk I will motivate the use of perturbation theory and outline the mathematical formalism. Perturbations beyond linear order are especially interesting as non-Gaussian effects can be used to constrain inflationary models. I will show how the Klein-Gordon equation at second order, written in terms of scalar field variations only, can be numerically solved. The slow roll version of the second order source term is used and the method is shown to be extendable to the full equation. This procedure allows the evolution of second order perturbations in general and the calculation of the non-Gaussianity parameter in cases where there is no analytical solution available.
If you've tried Apache Solr 1.4, you've probably had a chance to take it for a spin indexing and searching your data, and getting acquainted with its powerful, versatile new features and functions. Now, it's time to roll up your sleeves and really master what Solr 1.4 has to offer.
Local variables are stored in memory during the execution of a function. When the function exits, the memory for local variables is deallocated. Local variables provide convenient temporary storage but have a short lifetime and cannot persist beyond the function call. Returning a pointer to a local variable from a function results in a dangling pointer as the memory is deallocated when the function exits.
Querying UML Class Diagrams - FoSSaCS 2012 (Giorgio Orsi)
UML Class Diagrams (UCDs) are the best known class-based formalism for conceptual modeling. They are used by software engineers to model the intensional structure of a system in terms of classes, attributes and operations, and to express constraints that must hold for every instance of the system. Reasoning over UCDs is of paramount importance in design, validation, maintenance and system analysis; however, for medium and large software projects, reasoning over UCDs may be impractical. Query answering, in particular, can be used to verify whether a (possibly incomplete) instance of the system modeled by the UCD, i.e., a snapshot, enjoys a certain property. In this work, we study the problem of querying UCD instances, and we relate it to query answering under guarded Datalog +/-, that is, a powerful Datalog-based language for ontological modeling. We present an expressive and meaningful class of UCDs, named UCDLog, under which conjunctive query answering is tractable in the size of the instances.
Here are the key steps to perform a deep copy in the copy constructor:
1. Allocate new memory for the target object's pointer attribute using new.
2. Loop through the source object's pointer attribute array and copy each element to the target's array.
3. The target object now has its own independently allocated copy of the pointer attribute array.
This avoids the target object sharing/pointing to the source object's pointer attribute memory.
A copy constructor implementing deep copy for the Student class pointer attribute could be:
Student(const Student& s) {
    size = s.size;
    marks = new double[size];           // step 1: allocate new memory
    for (int i = 0; i < size; i++) {
        marks[i] = s.marks[i];          // step 2: copy each element
    }
}
- Java is a platform independent programming language that is similar to C++ in syntax but similar to Smalltalk in its object-oriented approach. It provides features like automatic memory management, security, and multi-threading capabilities.
- Java code is compiled to bytecode that can run on any Java Virtual Machine (JVM). The JVM interprets the bytecode and may perform just-in-time (JIT) compilation for improved performance. Depending only on the JVM allows Java programs to run on any hardware or operating system with a JVM.
- Java supports object-oriented programming principles like encapsulation, inheritance, and polymorphism. Classes can contain methods and instance variables. Methods can be called on objects to perform operations or retrieve data, and can return values.
Kenneth presented a summary of his participation in the Kaggle TGS Salt Identification Challenge. He discussed the top solutions, which utilized techniques like ResNeXt and ResNet backbones, feature pyramid attention, Lovász loss, and test-time augmentation. Kenneth's own solution ranked 1482nd, using a U-Net architecture with Lovász loss and test-time flipping. He analyzed techniques that did and did not work well for this salt segmentation task on seismic images.
This document provides an overview of various data structures available in the Standard PHP Library (SPL). It discusses DoublyLinkedList, Stack, Queue, Heap, MaxHeap, MinHeap, PriorityQueue, FixedArray, and ObjectStorage. For each data structure, it describes the implementation and the time complexity of common operations like insertion, deletion, and lookup. The document aims to help readers understand which data structure to use for different use cases based on their performance characteristics.
The document discusses core Java programming concepts like data types, variables, wrapper classes, and methods. It provides details on declaring and initializing variables, object reference variables, and the main method. The document also includes frequently asked questions about Java concepts and concludes with an assignment on string operations.
The document describes using threshold-based agent models to optimize plant placement in a landscape. It proposes an agent-based algorithm where individual "plants" search the landscape for optimal locations based on their light and water requirements. A genetic algorithm approach is also mentioned. The goal is to maximize overall plant growth by finding placements where each plant meets a 70% threshold of its ideal growth conditions. Future work could include formal analysis and comparisons to determine how well the approach works at finding the optimal plant collection for a given landscape.
M Gumbel - SCABIO: a framework for bioinformatics algorithms in Scala (Jan Aerts)
SCABIO is a framework for bioinformatics algorithms written in Scala. It was originally developed in 2010 for education purposes and contains mainly standard algorithms. The framework uses a dynamic programming approach and allows bioinformatics algorithms to be implemented in a concise way. It also enables integration with other Java frameworks like BioJava. The source code is open source and available on GitHub under an Apache license.
Here we take a look at how to use the for loop, the foreach loop, and the while loop. We also learn how to use and invoke methods and how to define classes in the Java programming language.
This will address two recently concluded Kaggle competitions.
1. Google landmark retrieval
2. Google landmark recognition
The talk will focus on image retrieval and recognition at large scale. The tentative plan for the presentation:
Primer on signal analysis (DFT, Wavelets).
Primer on information retrieval.
Tips for parallelizing your data pipeline.
Description of my approach and detailed discussion of bottlenecks, limitations and lessons.
In-depth analysis of winning solutions.
This will be a combination of theoretical rigor and practical implementation.
This document summarizes a presentation on augmenting descriptors for fine-grained visual categorization using polynomial embedding. The presentation introduces polynomial embedding as a method to exploit co-occurrence information between neighboring local descriptors. Polynomial embedding compresses polynomials of neighboring local feature vectors with supervised dimensionality reduction to obtain discriminative latent descriptors. Experiments on fine-grained categorization datasets show that polynomial embedding improves classification accuracy over baselines and state-of-the-art methods. However, the method is less effective for object and scene categorization problems.
The document discusses using neural networks to accelerate general purpose programs through approximate computing. It describes generating training data from programs, using this data to train neural networks, and then running the neural networks at runtime instead of the original programs. Experimental results show the neural network implementations provided speedups of 10-900% compared to the original programs with minimal loss of accuracy. An FPGA implementation of the neural networks was also able to achieve further acceleration, running a network 4x faster than software.
How to obtain animated slides in SlideShare from PowerPoint without much work or hassle. The idea is simple: automatically break each animation into a separate slide using a free software tool and upload the new slides to SlideShare.
Dave is an entrepreneur and startup mentor who shares his ideas through presentations on SlideShare. He discovered SlideShare after reading about it on TechCrunch and found it was a great way to share his presentations online. Dave became an advisor to SlideShare and later invested in the company. SlideShare now has new offices in San Francisco as it works to help more people like Dave share presentations online.
Ashfield Healthcare provides full-service advisory boards to pharmaceutical companies. They apply a consultative "Kinetic" approach to advisory boards that involves challenging objectives, bringing the right advisors, expert facilitation using interactive tools, and producing actionable outputs. Their methodology aims to make advisory boards strategic cornerstones that shape the client's journey. They highlight case studies where their approach led to engaged discussions, consensus on tangible plans, and implementation of agreed actions.
Regularization is used in deep learning to reduce generalization error by modifying the learning algorithm. Common regularization techniques for deep neural networks include:
1) Parameter norm penalties like L2 and L1 regularization that penalize the weights of a network. This encourages simpler models that generalize better.
2) Early stopping which obtains the model parameters at the point of lowest validation error during training, rather than at the end of training.
3) Data augmentation which creates additional fake training data through techniques like translation to improve robustness.
Prévision consommation électrique par processus à valeurs fonctionnellesCdiscount
The document discusses predicting functional time series. It introduces functional data as slices of a continuous stochastic process over time. The goal is to predict future values of this process given past observations. An autoregressive Hilbertian process of order 1 is proposed, where the current value is a linear function of the past value plus noise. Under certain conditions, this defines a strictly stationary process whose best predictor is the past value multiplied by the linear operator. The document outlines clustering functional data using wavelets and applying these methods to predict electricity demand.
Cosmological Perturbations and Numerical SimulationsIan Huston
Talk given at Queen Mary, University of London in March 2010.
Cosmological perturbation theory is well established as a tool for
probing the inhomogeneities of the early universe.
In this talk I will motivate the use of perturbation theory and
outline the mathematical formalism. Perturbations beyond linear order
are especially interesting as non-Gaussian effects can be used to
constrain inflationary models.
I will show how the Klein-Gordon equation at second order, written in
terms of scalar field variations only, can be numerically solved.
The slow roll version of the second order source term is used and the
method is shown to be extendable to the full equation. This procedure
allows the evolution of second order perturbations in general and the
calculation of the non-Gaussianity parameter in cases where there is
no analytical solution available.
If you've tried Apache Solr 1.4, you've probably had a chance to take it for a spin indexing and searching your data, and getting acquainted with its powerful, versatile new features and functions. Now, it's time to roll up your sleeves and really master what Solr 1.4 has to offer.
Local variables are stored in memory during the execution of a function. When the function exits, the memory for local variables is deallocated. Local variables provide convenient temporary storage but have a short lifetime and cannot persist beyond the function call. Returning a pointer to a local variable from a function results in a dangling pointer as the memory is deallocated when the function exits.
Querying UML Class Diagrams - FoSSaCS 2012Giorgio Orsi
UML Class Diagrams (UCDs) are the best known class-based formalism for conceptual modeling. They are used by software engineers to model the intensional structure of a system in terms of classes, attributes and operations, and to express constraints that must hold for every instance of the system. Reasoning over UCDs is of paramount importance in design, validation, maintenance and system analysis; however, for medium and large software projects, reasoning over UCDs may be impractical. Query answering, in particular, can be used to verify whether a (possibly incomplete) instance of the system modeled by the UCD, i.e., a snapshot, enjoys a certain property. In this work, we study the problem of querying UCD instances, and we relate it to query answering under guarded Datalog +/-, that is, a powerful Datalog-based language for ontological modeling. We present an expressive and meaningful class of UCDs, named UCDLog, under which conjunctive query answering is tractable in the size of the instances.
Here are the key steps to perform a deep copy in the copy constructor:
1. Allocate new memory for the target object's pointer attribute using new.
2. Loop through the source object's pointer attribute array and copy each element to the target's array.
3. The target object now has its own independently allocated copy of the pointer attribute array.
This avoids the target object sharing/pointing to the source object's pointer attribute memory.
A copy constructor implementing deep copy for the Student class pointer attribute could be:
Student(const Student& s) {
size = s.size;
marks = new double[size];
for(int i=0; i<size;
- Java is a platform independent programming language that is similar to C++ in syntax but similar to Smalltalk in its object-oriented approach. It provides features like automatic memory management, security, and multi-threading capabilities.
- Java code is compiled to bytecode that can run on any Java Virtual Machine (JVM). Only depending on the JVM allows Java code to run on any hardware or operating system with a JVM.
- Java supports object-oriented programming concepts like inheritance, polymorphism, and encapsulation. Classes can contain methods and instance variables. Methods perform actions and can return values.
- Java is a platform independent programming language that is similar to C++ in syntax but similar to Smalltalk in its object-oriented approach. It provides features like automatic memory management, security, and multi-threading capabilities.
- Java code is compiled to bytecode that can run on any Java Virtual Machine (JVM). The JVM then interprets the bytecode and may perform just-in-time (JIT) compilation for improved performance. This allows Java programs to run on any platform with a JVM.
- Java supports object-oriented programming principles like encapsulation, inheritance, and polymorphism. Classes can contain methods and instance variables. Methods can be called on objects to perform operations or retrieve data.
Kenneth presented a summary of his participation in the Kaggle TGS Salt Identification Challenge. He discussed the top solutions which utilized techniques like ResNeXt and ResNet backbones, feature pyramid attention, Lovasz loss, and test time augmentation. Kenneth's own solution ranked 1482nd, using a U-Net architecture with Lovasz loss and test time flipping. He analyzed techniques that did and did not work well for this salt segmentation task from radar images.
This document provides an overview of various data structures available in the PHP Standard Library (SPL). It discusses DoublyLinkedList, Stack, Queue, Heap, MaxHeap, MinHeap, PriorityQueue, FixedArray, and ObjectStorage. For each data structure, it describes their implementation and time complexity for common operations like insertion, deletion, lookup, etc. The document aims to help understand which data structure to use for different use cases based on their performance characteristics.
The document discusses core Java programming concepts like data types, variables, wrapper classes, and methods. It provides details on declaring and initializing variables, object reference variables, and the main method. The document also includes frequently asked questions about Java concepts and concludes with an assignment on string operations.
The document describes using threshold-based agent models to optimize plant placement in a landscape. It proposes an agent-based algorithm where individual "plants" search the landscape for optimal locations based on their light and water requirements. A genetic algorithm approach is also mentioned. The goal is to maximize overall plant growth by finding placements where each plant meets a 70% threshold of its ideal growth conditions. Future work could include formal analysis and comparisons to determine how well the approach works at finding the optimal plant collection for a given landscape.
M Gumbel - SCABIO: a framework for bioinformatics algorithms in ScalaJan Aerts
SCABIO is a framework for bioinformatics algorithms written in Scala. It was originally developed in 2010 for education purposes and contains mainly standard algorithms. The framework uses a dynamic programming approach and allows bioinformatics algorithms to be implemented in a concise way. It also enables integration with other Java frameworks like BioJava. The source code is open source and available on GitHub under an Apache license.
Here we are going to take a look how to use for loop, foreach loop and while loop. Also we are going to learn how to use and invoke methods and how to define classes in Java programming language.
This will address two recently concluded Kaggle competitions.
1. Google landmark retrieval
2. Google landmark recognition
The talk would focus on image retrieval and recognition in large scale. The tentative plan for the presentation:
Primer on signal analysis (DFT, Wavelets).
Primer on information retrieval.
Tips for parallelizing your data pipeline.
Description of my approach and detailed discussion of bottlenecks, limitations and lessons.
In-depth analysis of winning solutions.
This will be a combination of theoretical rigor and practical implementation.
This document summarizes a presentation on augmenting descriptors for fine-grained visual categorization using polynomial embedding. The presentation introduces polynomial embedding as a method to exploit co-occurrence information between neighboring local descriptors. Polynomial embedding compresses polynomials of neighboring local feature vectors with supervised dimensionality reduction to obtain discriminative latent descriptors. Experiments on fine-grained categorization datasets show that polynomial embedding improves classification accuracy over baselines and state-of-the-art methods. However, the method is less effective for object and scene categorization problems.
The document discusses using neural networks to accelerate general purpose programs through approximate computing. It describes generating training data from programs, using this data to train neural networks, and then running the neural networks at runtime instead of the original programs. Experimental results show the neural network implementations provided speedups of 10-900% compared to the original programs with minimal loss of accuracy. An FPGA implementation of the neural networks was also able to achieve further acceleration, running a network 4x faster than software.
How to obtain animated slides on SlideShare from PowerPoint without much work or hassle. The idea is simple: automatically break each animation into a separate slide using free software, and upload the new slides to SlideShare.
Dave is an entrepreneur and startup mentor who shares his ideas through presentations on SlideShare. He discovered SlideShare after reading about it on TechCrunch and found it was a great way to share his presentations online. Dave became an advisor to SlideShare and later invested in the company. SlideShare now has new offices in San Francisco as it works to help more people like Dave share presentations online.
Ashfield Healthcare provides full-service advisory boards to pharmaceutical companies. They apply a consultative "Kinetic" approach to advisory boards that involves challenging objectives, bringing the right advisors, expert facilitation using interactive tools, and producing actionable outputs. Their methodology aims to make advisory boards strategic cornerstones that shape the client's journey. They highlight case studies where their approach led to engaged discussions, consensus on tangible plans, and implementation of agreed actions.
Ashfield Head of Clinical – Europe, Nagore Fernandez, presented at eyeforpharma 2017, sharing learnings from Ashfield’s 15 years of experience delivering patient support services. The presentation covers how to design, deliver and measure a truly differentiated patient support programme, as well as practical do’s and don’ts for success.
The introduction discusses autotrophic organisms and their energy sources. Engelmann's experiment shows that chlorophyll absorbs blue and red light best, indicating where the photosynthetic rate is highest. Photosynthesis includes light-dependent and light-independent phases, the latter involving the Calvin cycle to fix carbon into organic compounds.
Computer animation uses computer graphics to generate animated images. Modern animation typically uses 3D graphics, though 2D is still used for stylistic or faster rendering. Developments in computer graphics are presented annually at SIGGRAPH, where developers strive to achieve film-quality CGI in real-time on personal computers. One early use of computer animation was in the 1973 film Westworld, while the first 3D wireframe imagery was in its 1976 sequel featuring a computer-generated hand and face.
The document discusses different types of assistance that could be offered as part of an assistance program, including assistance for auto, household, trips, demise, basic needs, schools, discounts on drugs, women's/maternity needs, quality of life/nutrition, second medical opinions, condominiums, motorcycles, businesses, home checkups, pets, computer help, seniors, young people/university students, and crime victims. It shows the existing types of assistance and proposes including new types of assistance. Many insurers already bundle home coverage into their auto insurance plans.
The document discusses adopting a customer-focused mindset when designing medical affairs programs to create more engaging communications in the "Age of Personalization". It notes that attention spans are shrinking while information exposure is greater. Healthcare professionals feel pressure from inefficient systems and data overload. Effective communications need to gain a deeper understanding of audiences, create personalized communication plans, and develop personalized content. This involves understanding barriers/drivers, tailoring content to individuals based on traits and dynamic data, and adapting an educational system based on individual progress.
Each month, join us as we highlight and discuss hot topics ranging from the future of higher education to wearable technology, best productivity hacks and secrets to hiring top talent. Upload your SlideShares, and share your expertise with the world!
Not sure what to share on SlideShare?
SlideShares that inform, inspire and educate attract the most views. Beyond that, ideas for what you can upload are limitless. We’ve selected a few popular examples to get your creative juices flowing.
SlideShare is a global platform for sharing presentations, infographics, videos and documents. It has over 18 million pieces of professional content uploaded by experts like Eric Schmidt and Guy Kawasaki. The document provides tips for setting up an account on SlideShare, uploading content, optimizing it for searchability, and sharing it on social media to build an audience and reputation as a subject matter expert.
In topological inference, the goal is to extract information about a shape, given only a sample of points from it. There are many approaches to this problem, but the one we focus on is persistent homology. We get a view of the data at different scales by imagining the points are balls and considering different radii. The shape information we want comes in the form of a persistence diagram, which describes the components, cycles, bubbles, etc. in the space that persist over a range of different scales.
To actually compute a persistence diagram in the geometric setting, previous work required complexes of size n^O(d). We reduce this complexity to O(n) (hiding some large constants depending on d) by using ideas from mesh generation.
This talk will not assume any knowledge of topology. This is joint work with Gary Miller, Benoit Hudson, and Steve Oudot.
The document describes Threp, a lightweight remapping framework for use in Earth system models. Threp aims to provide a flexible, readable, and efficient framework for remapping data between different grid types, including regular, rectilinear, curvilinear, and unstructured grids. It supports operations like interpolation, masking, and extrapolation between source and destination grids. Threp uses a two-stage process of first generating interpolation weights and then applying those weights to remap data values. It is designed for parallel computation and to be easily extensible to support new interpolation methods and grid types.
2021-03-01 - On the relationship between self-attention and convolutional layers (JAEMINJEONG5)
1) This document provides theoretical and empirical evidence that self-attention layers can learn behaviors similar to convolutional layers.
2) It presents a constructive proof showing that self-attention layers can express any convolutional layer. Experiments show attention layers learn grid-like patterns around query pixels like convolutions.
3) A single multi-head self-attention layer using relative positional encoding can parametrize any convolutional layer.
The document discusses data structures and algorithms. It defines key concepts like primitive data types, data structures, static vs dynamic structures, abstract data types, algorithm design, analysis of time and space complexity, recursion, stacks and common stack operations like push and pop. Examples are provided to illustrate factorial calculation using recursion and implementation of a stack.
The document discusses data structures and algorithms. It defines key concepts like primitive data types, data structures, static vs dynamic structures, abstract data types, algorithm design, analysis of time and space complexity, and recursion. It provides examples of algorithms and data structures like stacks and using recursion to calculate factorials. The document covers fundamental topics in data structures and algorithms.
The document discusses data structures and algorithms. It defines key concepts like primitive data types, data structures, static vs dynamic structures, abstract data types, algorithm design, analysis of time and space complexity, and recursion. It provides examples of algorithms and data structures like arrays, stacks and the factorial function to illustrate recursive and iterative implementations. Problem solving techniques like defining the problem, designing algorithms, analyzing and testing solutions are also covered.
The document discusses data structures and algorithms. It defines key concepts like primitive data types, data structures, static vs dynamic structures, abstract data types, algorithm analysis including time and space complexity, and common algorithm design techniques like recursion. It provides examples of algorithms and data structures like stacks and using recursion to calculate factorials. The document covers fundamental topics in data structures and algorithms.
This document discusses data structures and algorithms. It begins by defining data structures as the logical organization of data and primitive data types like integers that hold single pieces of data. It then discusses static versus dynamic data structures and abstract data types. The document outlines the main steps in problem solving as defining the problem, designing algorithms, analyzing algorithms, implementing, testing, and maintaining solutions. It provides examples of space and time complexity analysis and discusses analyzing recursive algorithms through repeated substitution and telescoping methods.
[PR12] PR-036 Learning to Remember Rare Events (Taegyun Jeon)
This document summarizes a paper on learning to remember rare events using a memory-augmented neural network. The paper proposes a memory module that stores examples from previous tasks to help learn new rare tasks from only a single example. The memory module is trained end-to-end with the neural network on two tasks: one-shot learning on Omniglot characters and machine translation of rare words. The implementation uses a TensorFlow memory module that stores key-value pairs to retrieve examples similar to a query. Experiments show the memory module improves one-shot learning performance and handles rare words better than baselines.
Abstract: For many years, Machine Learning has focused on a key issue: the design of input features to solve prediction tasks. In this presentation, we show that many learning tasks, from structured output prediction to zero-shot learning, can benefit from an appropriate design of output features, broadening the scope of regression. As an illustration, I will briefly review different examples and recent results obtained in my team.
Structured regression for efficient object detection (zukun)
This document summarizes research on structured regression for efficient object detection. It proposes framing object localization as a structured output regression problem rather than a classification problem. This involves learning a function that maps images directly to object bounding boxes. It describes using a structured support vector machine with joint image/box kernels and box overlap loss to learn this mapping from training data. The document also outlines techniques for efficiently solving the resulting argmax problem using branch-and-bound optimization and discusses extensions to other tasks like image segmentation.
This document discusses analysis fundamentals for measuring algorithm efficiency. It defines asymptotic analysis and big O notation for describing how a function's runtime grows relative to the input size. Common time complexities like constant, logarithmic, linear, quadratic, and exponential are explained. Examples are given to show how problem sizes and runtimes scale based on these complexity classes when the input or computer speed changes.
This document provides an overview of object-oriented programming concepts using C++. It discusses key OOP concepts like objects, classes, encapsulation, inheritance, polymorphism, and dynamic binding. It also covers C++ specific topics like functions, arrays, strings, modular programming, and classes and objects in C++. The document is intended to introduce the reader to the fundamentals of OOP using C++.
This document provides an overview of instance-based learning and k-nearest neighbors (kNN) classification. It discusses how kNN works by storing all training examples and classifying new instances based on the majority class of the k nearest neighbors. It covers selecting k, different distance functions, variants like distance-weighted and attribute-weighted kNN, and the strengths and weaknesses of the approach. The next class will discuss case-based reasoning and learning distance functions and prototypes.
Mining Adaptively Frequent Closed Unlabeled Rooted Trees in Data Streams (Albert Bifet)
This document discusses mining frequent closed unlabeled rooted trees in data streams. It introduces the problem of finding frequent closed trees in a data stream of unlabeled rooted trees. It describes some of the challenges of data streams, including that the sequence is potentially infinite, there is a high amount of data requiring sublinear space, and a high speed of arrival requiring sublinear time per example. The document outlines an approach using ADWIN, an adaptive sliding window algorithm, to detect concept drift and adapt the window size accordingly.
In any given function in java, what do I need to look for in order t.pdf (neetuarya13)
In any given function in Java, what do I need to look for in order to determine its space
complexity? Thank you
Solution
You need to identify whether any memory is being created locally in the function to process
the current execution of the function.
The types of memory generally created in a function are:
1. Local variables, e.g. int a, double y, String z, etc.: space complexity O(1), i.e. constant
2. An Array, ArrayList, LinkedList, HashMap, etc. of size k: space complexity O(k)
For example:
void merge(int arr[], int l, int m, int r)
{
    // Find sizes of two subarrays to be merged
    int n1 = m - l + 1;
    int n2 = r - m;
    /* Create temp arrays */
    int L[] = new int[n1]; // WE ARE CREATING EXTRA MEMORY
    int R[] = new int[n2]; // WE ARE CREATING EXTRA MEMORY
    /* Copy data to temp arrays */
    for (int i = 0; i < n1; ++i)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; ++j)
        R[j] = arr[m + 1 + j];
    // ... merge L and R back into arr ...
}
Here the temp arrays give merge a space complexity of O(n1 + n2), i.e. O(r - l + 1).
Programming Language Memory Models: What do Shared Variables Mean? (greenwop)
The document discusses the challenges of defining memory models for shared memory parallel programs. It argues that there is emerging consensus around an interleaving semantics called Sequential Consistency, but only for programs that are free of data races. This allows for important compiler and hardware optimizations while restricting reordering around synchronization. However, languages like Java cannot outlaw all data races, so the meaning of programs with races remains unclear. The document explores some speculative solutions to address this major open problem.
This document discusses algorithms and their analysis. It begins by defining an algorithm and its key characteristics like being finite, definite, and terminating after a finite number of steps. It then discusses designing algorithms to minimize cost and analyzing algorithms to predict their performance. Various algorithm design techniques are covered like divide and conquer, binary search, and its recursive implementation. Asymptotic notations like Big-O, Omega, and Theta are introduced to analyze time and space complexity. Specific algorithms like merge sort, quicksort, and their recursive implementations are explained in detail.
The document provides an overview of FPGA routing, which is an important step in the CAD process that connects logic blocks placed on the FPGA. It discusses the routing resources in Xilinx FPGAs including connection boxes, switch boxes, and wire segments. It also describes the FPGA routing model commonly used in academia, which simplifies the island-style architecture of commercial FPGAs. Efficient routing aims to minimize wiring area and critical path lengths to improve circuit performance.
This document describes the design of a softcore microcontroller called UMASS-core that is implemented in an FPGA. The UMASS-core is based on the Microchip PIC16F84 8-bit microcontroller architecture but adds some additional features. It is written in VHDL and synthesized for a Spartan3 FPGA. The goal is to provide designers flexibility to customize the microcontroller configuration while maintaining compatibility with PIC16F84 assembly code.
This document describes the Taylor Decomposition System (TDS) data structures and optimization flow. TDS contains three main data structures - the Taylor Expansion Diagram (TED) which represents the algorithmic behavior, the Data Flow Graph (DFG) which visualizes the TED, and the Netlist (NTL) which interfaces with high-level synthesis tools. TDS takes in C/C++ designs through tools like GAUT, performs optimizations on the TED and DFG, and outputs optimized netlists to the synthesis tools for further processing and output of HDL code.
This document discusses the GAUT digital synthesis tool. It describes GAUT's main window which allows compiling C code to a control/data flow graph (CDFG) and then synthesizing the CDFG to VHDL. The CDFG to VHDL synthesis can generate registers and muxes in the VHDL output based on the specified pipeline cadence. Pipelining is not allowed if the new cadence is not a multiple of the clock cycle.
This debugging session summarizes an issue where the TDS tool was producing inconsistent behavior when run multiple times on the same input, sometimes outputting a latency of 10 and other times a latency of 9. The debugging found that the inconsistent behavior was caused by memory allocation occurring in different locations each time, resulting in TedNodes having different pointers and getting mapped to DfgNodes in a different order, changing the output. The solution taken was to modify the associative container to traverse nodes in a consistent list order rather than by key.
This debugging session addresses Bug 111 in TDS version 99. The bug occurred when extracting product terms from an expression during the decompose command. Specifically, when extracting the third product term (PT3), the sub-chain that was extracted as the first product term (PT1) was removed. The solution was to check if the top node of an extracted product term has parents still in the container, and if so, update the parents to point to the extracted variable rather than the removed nodes. This prevents extracted sub-chains from being deleted during later extractions.
This debugging session document summarizes fixing bug 119 in TDS software. The bug occurred when decomposing polynomials, where the top node of the removed chain was losing its backpointer information during the ted2dfg conversion. The solution was to copy the backpointer for the top node as well, matching the rest of the internal nodes. This ensured the full chain maintained connections when decomposed terms were removed.
This document describes debugging a reordering bug in the TDS software. The bug caused reordering to abort at 4% completion. By analyzing debug files, the author isolated the bug to an issue when nodes ai_12 and ai_4 were being swapped. The author then developed techniques like "print_cone" to reduce the test case size. Further analysis using visualization tools revealed the bug was caused by dangling node references during recursive reordering of parent nodes. The solution was to reorder nodes in a levelized manner to avoid reference issues.
1. Data structures
& algorithms
Basics: Part I
By Daniel Gomez-Prado
Sept 2012
Disclaimer: This tutorial may contain
errors; use it at your own discretion.
The slides were prepared for a class
review on basic data structures at
University of Massachusetts, Amherst.
http://www.dgomezpr.com
2. Outline
• Analysis of complexity
o Q1 Fall 2011 problems
• Classes, objects and containers
• Array
• Stack
• Queue
• (Single) Linked and Double Linked List
• Iterators
• Linear and Binary search
• Merge sort
• Quick sort
• Q2 – Q6, Fall 2011 problems
3. Analysis big-Oh
Random access memory
Your program Memory
import java.util.*;
class review {
    static public void main(String[] args) {
        // this is a review
        // for ECE242 exam
    }
}
• Assumptions:
o Unlimited memory: we have what we need, be it 1 GB or 1,000 TB
o No hierarchical memory (no cache, L1, L2, or hard drive)
o All memory accesses take 1 unit of time
4. Analysis big-Oh
Running time
PROGRAM                            OPERATIONS                        STEPS
int sum = 0;                       1 assignment                      1
for (i = 0; i < 128; i++)          i = 1, 2, 3, 4 ... 128            128
  for (j = 128; j > 0; j = j/2)    j = 128, 64, 32, 16, 8, 4, 2, 1   log2(128)+1
    sum = sum + a[i][j];           1 addition, 1 assignment          2
Total: 1 + 128*(log2(128)+1)*2
In general we have an arbitrary number "n" instead of 128; n could be the size of the stack, queue or list, or the dimension of a matrix, etc. In that case the count is 1 + n*(log2(n)+1)*2. Can we simplify the expression? YES! By using big-Oh notation we can specify the asymptotic complexity of the algorithm.
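The count above can be checked empirically. Below is a small sketch (the class and method names are made up for this example) that executes the nested loop, tallies two steps per inner iteration plus one for the initial assignment, and compares the tally against the slide's closed form; n is assumed to be a power of two, as with n = 128.

```java
// Empirical step count for the nested loop on the previous slide.
public class StepCount {
    // Count the "steps" executed by the nested loop for a given n.
    static long steps(int n) {
        long steps = 1;                          // int sum = 0; is 1 assignment
        int sum = 0;
        for (int i = 1; i <= n; i++) {           // outer body runs n times
            for (int j = n; j > 0; j = j / 2) {  // log2(n)+1 iterations
                sum = sum + j;                   // 1 addition, 1 assignment
                steps += 2;
            }
        }
        return steps;
    }

    // The slide's closed form: 1 + n*(log2(n)+1)*2, exact for powers of two.
    static long predicted(int n) {
        int log2n = 31 - Integer.numberOfLeadingZeros(n);
        return 1 + (long) n * (log2n + 1) * 2;
    }

    public static void main(String[] args) {
        System.out.println(steps(128));      // 2049
        System.out.println(predicted(128));  // 2049
    }
}
```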
5. Analysis big-Oh
Definition
• Given functions f(n) and g(n):
o f(n) is said to be O(g(n))
o if and only if
• there are (exist) 2 positive constants, C>0 and N>0
o such that
• f(n) ≤ Cg(n) for every n>N
6. Analysis big-Oh
Example of definition usage
Applying the definition: f(n) and g(n) are given, and the task is to find a relationship between C and n. For the running time above, f(n) = 1 + 2n + 2n*log2(n), the claim O(n*log2(n)) is true, for example, with C = 3 and n ≥ 32.
7. Analysis big-Oh
Example 1
State the asymptotic complexity of:
big-Oh
i. Print out the middle element of an array of size n
Recall that arrays allow random access to any position, and every memory access takes 1 unit of time.
Solution: O(1)
8. Analysis big-Oh
Example 1
ii. Print out the middle element of a linked list of size n
Recall that in a linked list (head → next → next → ... → tail), reaching the middle element requires following n/2 links from the head.
f(n) = n/2
Solution: the asymptotic complexity is O(n)
9. Analysis big-Oh
Example 1
iii. Print out the odd elements of an array of size n
f(n) = n/2
Solution: the asymptotic complexity is O(n)
iv. Pop 10 elements from a stack that is implemented
with an array. Assume that the stack contains n
elements and n > 10.
When in doubt, ASK! Is n = 11 or is n > 1000?
f(n) = 10
Solution: the asymptotic complexity is O(1)
10. Classes and
objects
• The goal of a “class” (in object-oriented language)
o Encapsulate state and behavior
• A class is a blueprint that has
o a constructor to initialize its data members
o a destructor to tear down the object
o A coherent interface to interact with the object (public methods)
o Private methods unreachable from the outside
o The possibility to extend and inherit members from other classes
• An object is an instance of a class
• What are the benefits?
o Inheritance, extensions and packages allow structuring a program
o Exposes behavior that can be reused
o Alleviates the problem of understanding somebody else's code
11. ADT
(Abstract Data Type)
• ADTs are containers
• ADTs are primarily concerned with:
o Aggregation of data
o Access to data
o Efficiency of memory usage
o Efficiency of container access
12. Arrays
• Contiguous blocks of memory of a data type
o Any position can be randomly accessed
• Example
o int[] integer_array = new int[1024]; // 1024 ints, indices 0 to 1023
o ObjectY[] object_y_array = new ObjectY[512]; // 512 object references, indices 0 to 511
Java takes care of the memory management for you.
(Diagram: each array is a contiguous block with a fixed boundary; an object array stores references to objects such as ObjectX, ObjectY and ObjectZ elsewhere in memory.)
13. Stacks
• Enforce a LIFO behavior (last in, first out)
o It is based on an array
o It overrides the random access of an array by a LIFO access
(Diagram: three snapshots of an array-based stack of capacity 1024. The status is an index marking the top; isEmpty is true only before the first push, and peek, push and pop all operate at that index.)
14. Stacks
• Enforce a LIFO behavior (last in, first out)
o It is based on an array
o It overrides the random access of an array by a LIFO access
• Recall what a class encapsulates:
o Status & behavior
• Does that mean we are always safe?
o index = -1: the stack is empty, good
o index = 1024: the array is full; either refuse to push objects, or signal the overflow with a runtime exception
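The checks above can be folded into a small array-based stack. This is a minimal sketch, not the course's reference implementation; the ArrayStack name and its error handling are assumptions made for the example.

```java
// Minimal array-based stack enforcing LIFO access on a fixed-size array.
public class ArrayStack {
    private final Object[] data;
    private int index = -1;              // -1 means the stack is empty

    public ArrayStack(int capacity) { data = new Object[capacity]; }

    public boolean isEmpty() { return index == -1; }

    public void push(Object x) {
        if (index == data.length - 1)    // full: signal overflow
            throw new RuntimeException("stack overflow");
        data[++index] = x;
    }

    public Object peek() {
        if (isEmpty()) throw new RuntimeException("stack empty");
        return data[index];
    }

    public Object pop() {
        Object top = peek();
        data[index--] = null;            // clear the slot for the GC
        return top;
    }
}
```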
15. Queues
• Enforce a FIFO behavior (first in, first out)
o It is based on an array
o It overrides the random access of an array by a FIFO access
(Diagram: three snapshots of an array-based queue of capacity 1024. The status is a pair of indices: index1 marks the head, where peek and dequeue operate, and index2 marks the tail, where enqueue operates.)
16. Queues
• Enforce a FIFO behavior (first in, first out)
o It is based on an array
o It overrides the random access of an array by a FIFO access
• Recall what a class encapsulates:
o Status & behavior
• Does that mean we are always safe?
o index1 = index2: the queue is empty, good
o index1 or index2 = 1024: rewind the index to 0, either by testing the condition or by incrementing using mod 1024
o What if index2 wraps past index1? The occupied region is then split across the array boundary, which the modular arithmetic handles.
17. Queues
• Enforce a FIFO behavior (first in, first out)
o It is based on an array
o It overrides the random access of an array by a FIFO access
(Diagram: a wrapped queue where index2 has rewound to the low end of the array while index1 is still near the high end; peek and dequeue still follow index1.)
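The rewind-with-mod idea above can be sketched as a circular-array queue. The ArrayQueue name and its count field (used to tell an empty queue from a full one when index1 == index2) are assumptions made for this example, not the course's reference code.

```java
// Circular-array queue: index1 is the head (dequeue side), index2 the tail
// (enqueue side); both rewind to 0 using mod, as described on slide 16.
public class ArrayQueue {
    private final Object[] data;
    private int index1 = 0;   // head: next element to dequeue
    private int index2 = 0;   // tail: next free slot to enqueue into
    private int count = 0;    // distinguishes empty from full

    public ArrayQueue(int capacity) { data = new Object[capacity]; }

    public boolean isEmpty() { return count == 0; }

    public void enqueue(Object x) {
        if (count == data.length) throw new RuntimeException("queue full");
        data[index2] = x;
        index2 = (index2 + 1) % data.length;  // rewind to 0 using mod
        count++;
    }

    public Object peek() {
        if (isEmpty()) throw new RuntimeException("queue empty");
        return data[index1];
    }

    public Object dequeue() {
        Object front = peek();
        data[index1] = null;
        index1 = (index1 + 1) % data.length;  // head also wraps around
        count--;
        return front;
    }
}
```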
18. Is everything an Array?
Can we use something else?
• Recall an array:
o Contiguous memory
o Fixed bound size
o Random access
o The beginning and the end are fixed
• Let's use another construct:
o Non-contiguous memory
o Unlimited size
o Sequential access only
o How do we know, in this container, the beginning, the end, and which element is next?
(Diagram: a node holds an object plus edges, prev and next pointers, to its neighbors; a head pointer marks the start of the chain.)
19. Linked List
• Use the prior construct (node and edge)
(Diagram: head → object → object → object → …; push, pop and peek all operate on the node at the head.)
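The head-only operations above make a linked stack easy to sketch. LinkedStack and its Node class are hypothetical names for this example; because every operation touches only the head, push, pop and peek are all O(1).

```java
// Stack built on a singly linked list: all operations work at the head.
public class LinkedStack {
    private static class Node {
        Object object; Node next;
        Node(Object object, Node next) { this.object = object; this.next = next; }
    }
    private Node head;                    // null when the stack is empty

    public boolean isEmpty() { return head == null; }

    public void push(Object x) { head = new Node(x, head); }  // new head

    public Object peek() {
        if (head == null) throw new RuntimeException("stack empty");
        return head.object;
    }

    public Object pop() {
        Object top = peek();
        head = head.next;                 // unlink the old head
        return top;
    }
}
```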
20. Double linked List
• Use the prior construct (node and edge)
(Diagram: head → object ⇄ object ⇄ object ⇄ …; each node keeps both prev and next pointers.)
21. Quick Questions
Can we do:
• a linked list or double linked list from an array
o Yes
• a queue with nodes and edges
o Why not?
• a stack with nodes and edges
o Sure
22. Iterators encapsulate
container traversals
• We have two implementations of a stack: one on top of an array, whose status is an index, and one on top of linked nodes, whose status is a head pointer with prev/next edges.
(Diagram: the same stack contents, 1 2 3 4, held in a 1024-slot array with the index at the top element, and in a chain of doubly linked nodes reached from head.)
23. Iterators encapsulate container traversals
• An iterator traverses the container according to the container's rules, keeping its own traversal state:
o array implementation: increment or decrement the index by 1
o linked implementation: update to the next or prev pointer
(Diagram: after pushing 5, both implementations present the same LIFO view, with 5 on top.)
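One way to sketch this in Java is with the standard Iterator and Iterable interfaces; the LinkedStack2 name and its int payload are assumptions for the example. The iterator keeps the traversal state (a node cursor) and hides the container's internals from the caller.

```java
import java.util.Iterator;

// A linked stack whose traversal is encapsulated in an iterator:
// callers loop over it without knowing whether the implementation
// follows pointers or increments an array index.
public class LinkedStack2 implements Iterable<Integer> {
    private static class Node {
        int key; Node next;
        Node(int key, Node next) { this.key = key; this.next = next; }
    }
    private Node head;

    public void push(int x) { head = new Node(x, head); }

    @Override
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private Node cursor = head;   // the iterator's own state
            @Override public boolean hasNext() { return cursor != null; }
            @Override public Integer next() {
                int k = cursor.key;
                cursor = cursor.next;     // traverse by the container's rules
                return k;
            }
        };
    }
}
```

Pushing 1, 2, 3, 4 and iterating yields 4 3 2 1, the stack's LIFO view.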
24. Searching
Linear vs Binary
• If you make no assumptions:
o iterate (traverse) all elements to find an existing element
o iterate (traverse) all elements to realize you don't have an element
Looking for u in a x z b n m l j i b c u: worst case, all elements are visited, O(n).
• If you assume the container is already ordered (according to a certain rule):
o iterate back/forth, skipping some elements to speed up the process
Looking for u in a b b c i j l m n u x z: worst case, log2(n)+1 elements are visited, O(log(n)).
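The two searches can be sketched side by side; the Searching class name is made up for the example, and binarySearch assumes its input is already sorted, exactly as the slide requires.

```java
// Linear search makes no assumptions and may visit all n elements;
// binary search assumes a sorted array and halves the range each probe.
public class Searching {
    static int linearSearch(char[] a, char key) {
        for (int i = 0; i < a.length; i++)
            if (a[i] == key) return i;
        return -1;                       // not found after visiting all n
    }

    static int binarySearch(char[] a, char key) {   // a must be sorted
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1;  // skip the lower half
            else hi = mid - 1;               // skip the upper half
        }
        return -1;                       // at most log2(n)+1 probes
    }
}
```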
25. So binary search is faster
but the assumption is…
• The container is already ordered, so:
o how do we sort a container?
o how do we insert elements into a sorted container?
o how do we remove elements from a sorted container?
• What is more expensive (in big-Oh terms)?
o a linear search
o ordering a container and then doing a binary search
o maintaining the container sorted and then doing a binary search
26. Sorting a container
Merge sort
• Use the divide and conquer approach
o Divide the problem into 2 subsets (unless you have a base case)
o Recursively solve each subset
o Conquer: solve the sub-problems and merge them into a solution
85 24 63 45 19 37 91 56
(Diagram: DIVIDE recursively splits the array into halves, 85 24 63 45 and 19 37 91 56, down to single elements; RECUR sorts each half; CONQUER then merges two sorted halves by repeatedly taking the minimum of their front elements, so 24 45 and 63 85 merge into 24 45 63 85.)
27. Sorting a container
Merge sort
• What is the complexity of merge sort
o Divide the problem into 2 subsets (2 times half the problem)
o Recursively solve each subset (keep subdividing the problem)
o Conquer: Solve the sub-problem and merge (the merge running time)
(Diagram: the recursion tree. Level 0 costs f(n) on 85 24 63 45 19 37 91 56; level 1 costs 2f(n/2); level 2 costs 4f(n/4); there are log2(n) levels. Merging runs of x and y elements costs O(x+y), so each level costs O(n) in total and the whole sort is O(n*log2(n)).)
29. Sorting a container
Merge sort
• Drawback of the merge sort algorithm
o The merge is not in place: repeatedly taking the minimum of the two fronts writes into additional memory, a temporary array.
• The merge could be modified to be in place, but the overhead would slow down the running time.
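Slides 26 to 29 can be condensed into the following sketch; the MergeSort name is made up for the example. Note the temporary array inside merge, which is exactly the "additional memory" drawback just described.

```java
// Merge sort: divide, recur on each half, then conquer by merging the
// two sorted runs through a temporary array (the merge is not in place).
public class MergeSort {
    static void sort(int[] a) { sort(a, 0, a.length - 1); }

    private static void sort(int[] a, int l, int r) {
        if (l >= r) return;              // base case: 0 or 1 element
        int m = (l + r) / 2;
        sort(a, l, m);                   // recursively solve each subset
        sort(a, m + 1, r);
        merge(a, l, m, r);               // conquer: merge the two runs
    }

    private static void merge(int[] a, int l, int m, int r) {
        int[] tmp = new int[r - l + 1];  // the additional memory
        int i = l, j = m + 1, k = 0;
        while (i <= m && j <= r)         // repeatedly take the min
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= m) tmp[k++] = a[i++];
        while (j <= r) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, l, tmp.length);
    }
}
```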
30. Sorting a container
Quick sort
• Use the divide and conquer approach
o Divide the problem into 3 subsets (unless you have a base case)
• A (random) pivot x
• A subset with numbers lower than x
• A subset with numbers greater than x
o Recursively solve each subset
o Conquer: solve the sub-problems and merge them into a solution
85 24 63 19 37 91 56 45
(Diagram: with pivot 45, the elements split into 24 37 19, all lower than the pivot, and 85 63 91 56, all greater; each subset is solved recursively, for example 24 37 splits again around its own pivot. The sorting is in place: it does not require additional memory.)
31. Sorting a container
Quick sort
• Complexity
o Quick sort (with random pivot) is O(nlogn)
• Drawback
o Replicated elements need special handling: the pivot is randomly chosen, and if an element equal to the pivot occurs twice, its copies can land in different divisions and the simple recombination won't work. A third subset holding elements equal to the pivot (or a different partition scheme) solves this.
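A sketch of quick sort with a random pivot follows. Instead of the slide's two-subset split, it uses a three-way (Dutch national flag) partition, a deliberate change that keeps elements equal to the pivot in a middle region so duplicates sort correctly; the QuickSort name is made up for the example.

```java
// Quick sort with a random pivot and a three-way partition:
// elements < pivot | elements == pivot | elements > pivot.
public class QuickSort {
    static void sort(int[] a) { sort(a, 0, a.length - 1); }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[lo + (int) (Math.random() * (hi - lo + 1))];
        int lt = lo, i = lo, gt = hi;    // partition boundaries
        while (i <= gt) {
            if (a[i] < pivot) swap(a, lt++, i++);       // grow the < region
            else if (a[i] > pivot) swap(a, i, gt--);    // grow the > region
            else i++;                                   // == pivot: leave it
        }
        sort(a, lo, lt - 1);             // recursively solve each subset
        sort(a, gt + 1, hi);             // the == region is already placed
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```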
32. Arrays
Example 2
Accept 2 integer arrays: A, with n elements, and B, with m elements. Find the number of common elements, assuming no duplicates within each array.
o Brute force: compare every element of A against every element of B: O(nm)
o Merge-sort modified: concatenate A and B into C, with n+m elements, and sort C; instead of merging, compare adjacent elements and increment a count when they are equal: O((n+m)*log(n+m))
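The modified-merge-sort idea can be sketched with the JDK's sort; countCommon is a hypothetical name. Because neither input array contains duplicates, equal neighbors in the sorted concatenation must pair one element of A with one of B.

```java
import java.util.Arrays;

// Count elements common to A and B by sorting their concatenation and
// counting adjacent equal pairs; valid because each array is duplicate-free.
public class CommonElements {
    static int countCommon(int[] a, int[] b) {
        int[] c = new int[a.length + b.length];
        System.arraycopy(a, 0, c, 0, a.length);
        System.arraycopy(b, 0, c, a.length, b.length);
        Arrays.sort(c);                         // O((n+m) log (n+m))
        int count = 0;
        for (int i = 1; i < c.length; i++)      // equal neighbors must come
            if (c[i] == c[i - 1]) count++;      // one from A, one from B
        return count;
    }
}
```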
33. Stacks and Queues
Example 3
a) Write a reverseQueue method using only stacks and queues
Dequeue every element of the queue (FIFO order: a b c d e f g h) and push it onto a stack; then pop the stack (LIFO) and enqueue each element back into the queue, leaving it reversed: h g f e d c b a. Overall cost: O(n).
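A sketch of Example 3 using the JDK's Queue and Deque types (the course's own stack and queue classes would work the same way); reverseQueue and demo are names made up for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

// Reverse a queue by draining it through a stack: FIFO out, LIFO in.
public class ReverseQueue {
    static <T> void reverseQueue(Queue<T> q) {
        Deque<T> stack = new ArrayDeque<>();
        while (!q.isEmpty()) stack.push(q.remove());  // queue -> stack
        while (!stack.isEmpty()) q.add(stack.pop());  // comes back reversed
    }

    // Small driver: reverse a, b, c, d and report the resulting order.
    static String demo() {
        Queue<Character> q = new ArrayDeque<>(java.util.List.of('a', 'b', 'c', 'd'));
        reverseQueue(q);
        StringBuilder sb = new StringBuilder();
        while (!q.isEmpty()) sb.append(q.remove());
        return sb.toString();                         // "dcba"
    }
}
```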
34. Stacks and Queues
Example 4
b) Write a method cutQueue that adds an element N to the head of the queue using only stacks and queues
Dequeue everything into a first stack (from the top: h g f e d c b a), pour that stack into a second stack to restore the order (from the top: a b c d e f g h), push N on top (N a b c d e f g h), and pop everything back into the queue, which now starts with N. Overall cost: O(n).
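A sketch of Example 4 with the same JDK types; cutQueue and demo are names made up for the example. The elements pass through two stacks so the queue's order is restored before N is slipped in at the front.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

// Add an element N to the head of a queue using two stacks: O(n) overall.
public class CutQueue {
    static <T> void cutQueue(Queue<T> q, T n) {
        Deque<T> s1 = new ArrayDeque<>();
        Deque<T> s2 = new ArrayDeque<>();
        while (!q.isEmpty()) s1.push(q.remove());  // s1 top = last element
        while (!s1.isEmpty()) s2.push(s1.pop());   // s2 top = first element
        s2.push(n);                                // N now precedes everything
        while (!s2.isEmpty()) q.add(s2.pop());     // queue is N, a, b, ...
    }

    // Small driver: cut 'N' into the queue a, b, c, d.
    static String demo() {
        Queue<Character> q = new ArrayDeque<>(java.util.List.of('a', 'b', 'c', 'd'));
        cutQueue(q, 'N');
        StringBuilder sb = new StringBuilder();
        while (!q.isEmpty()) sb.append(q.remove());
        return sb.toString();                      // "Nabcd"
    }
}
```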
35. List
Example 5
Write a method is_Sorted_Ascendent to check if a
singly linked list is sorted in non-decreasing order
head
next next next … next
Java pseudo code:
while (node.next != null) {
    if (node.next.key < node.key)
        return false;
    node = node.next;
}
return true;
36. List
Example 6
Write a method compress to remove duplicated
elements in a singly linked list
head
next next next … next
Java pseudo code:
while (node != null && node.next != null) {
    if (node.key == node.next.key) {
        node.next = node.next.next;
    } else {
        node = node.next;
    }
}
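Examples 5 and 6 can be made runnable on a minimal singly linked list; ListUtils, Node and fromArray are hypothetical names for this sketch.

```java
// Runnable versions of Examples 5 and 6 on a minimal singly linked list.
public class ListUtils {
    static class Node {
        int key; Node next;
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    // Example 5: true if the list is sorted in non-decreasing order.
    static boolean isSortedAscendent(Node node) {
        if (node == null) return true;
        while (node.next != null) {
            if (node.next.key < node.key) return false;
            node = node.next;
        }
        return true;
    }

    // Example 6: remove adjacent duplicates (on a sorted list this
    // removes every duplicate).
    static void compress(Node node) {
        while (node != null && node.next != null) {
            if (node.key == node.next.key) node.next = node.next.next;
            else node = node.next;
        }
    }

    // Small builder for tests: fromArray(1, 2, 3) gives 1 -> 2 -> 3.
    static Node fromArray(int... keys) {
        Node head = null;
        for (int i = keys.length - 1; i >= 0; i--) head = new Node(keys[i], head);
        return head;
    }
}
```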