A catalogue of self-experimentation data analyses: ideas on how to apply data science and draw conclusions about various aspects of your life using your own data.
How to use data to improve software development teams and processes. Presented at the Prairie Dev Con Deliver conference October 2016. http://www.prdcdeliver.com
Experimental Design Scientific Method and GraphingREVISED.ppt (MathandScienced)
Experimental Design, Scientific Method and Graphing. Scientific method. Graphing and experimental science. Chemistry and learning. Problem solving to the degree of fluency.
Data Science Interview Questions | Data Science Interview Questions And Answe... (Simplilearn)
This video on Data science interview questions will take you through some of the most popular questions that you face in your Data science interviews. It’s simply impossible to ignore the importance of data and our capacity to analyze, consolidate, and contextualize it. Data scientists are relied upon to fill this need, but there is a serious dearth of qualified candidates worldwide. If you’re moving down the path to be a data scientist, you need to be prepared to impress prospective employers with your knowledge. In addition to explaining why data science is so important, you’ll need to show that you're technically proficient with Big Data concepts, frameworks, and applications. So, here we discuss the list of most popular questions you can expect in an interview and how to frame your answers.
Why learn Data Science?
Data Scientists are being deployed in all kinds of industries, creating a huge demand for skilled professionals. The data scientist is the pinnacle rank in an analytics organization. Glassdoor ranked data scientist first in its 25 Best Jobs for 2016, and good data scientists are scarce and in great demand. As a data scientist, you will be required to understand the business problem, design the analysis, collect and format the required data, apply algorithms or techniques using the correct tools, and finally make recommendations backed by data.
You can gain in-depth knowledge of Data Science by taking our Data Science with Python certification training course. With Simplilearn's Data Science certification training course, you will prepare for a career as a Data Scientist as you master all the concepts and techniques. Those who complete the course will be able to:
1. Gain an in-depth understanding of data science processes, data wrangling, data exploration, data visualization, hypothesis building, and testing; you will also learn the basics of statistics
2. Install the required Python environment and other auxiliary tools and libraries
3. Understand the essential concepts of Python programming such as data types, tuples, lists, dicts, basic operators and functions
4. Perform high-level mathematical computing using the NumPy package and its large library of mathematical functions
5. Perform scientific and technical computing using the SciPy package and its sub-packages such as Integrate, Optimize, Statistics, IO and Weave
6. Perform data analysis and manipulation using data structures and tools provided in the Pandas package
7. Gain expertise in machine learning using the Scikit-Learn package
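As a quick taste of the NumPy-style computing the outline mentions, here is a minimal sketch; the step-count numbers are invented purely for illustration:

```python
import numpy as np

# High-level mathematical computing on an array of daily step counts.
steps = np.array([4200, 8100, 9650, 3020, 11000, 7600, 5400])

mean_steps = steps.mean()           # average over the week
active_days = (steps > 5000).sum()  # vectorised comparison + reduction
zscores = (steps - mean_steps) / steps.std()  # standardised values
```

The point of the vectorised style is that comparisons and reductions run over whole arrays without explicit Python loops.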
Learn more at www.simplilearn.com/big-data-and-analytics/python-for-data-science-training
#1NWebinar: Digital Blindspots - A Q&A on Common Marketing Analytics Hurdles (One North)
Although we have all kinds of technology at our fingertips, marketers continue to struggle to quantify and report on the effectiveness of their activities. In this Q&A-style #1NWebinar, Senior Data Strategist Ben Magnuson sat down with One North’s Marketing Coordinator Olivia Koivisto to discuss common data analytics and reporting questions from B2B and professional services marketers. During the session, Ben explained what to look for in analytics tools, how to identify which data points matter, the importance of goal-setting, and more.
Watch the recording: https://youtu.be/RsQZxFLfYnI
Agile Analysis 101: Agile Stats v Command & Control Maths (Axelisys Limited)
Introducing Agile teams to Statistical Analysis. It's the tool that will help them self-manage and I introduce simple methods to measure efficacy. We also compare and contrast the traditional use of mathematics for command and control versus statistics and learning for contemporary agile development and EA.
Estimation is associated with Fear, Uncertainty and Death marches. Most of us would rather not estimate. Yet, sometimes we do need estimates and commitments, even on "estimation-less" projects. Play a series of estimation games to experience how different techniques deliver very different results. Learn a few simple rules that turn you into a reliable estimator. But correct estimates aren't enough. See what else is required to deliver on your promises. Learn to deal with the destructive games people play with estimates. Estimating can be Fun, embracing Uncertainty and Delivering.
Quettra Design Problem Solution - Deepti Chafekar (quettra)
Quettra gives interview candidates a design problem to work on at home. Here's a sample response. Details of the interview process at http://www.quettra.com/blog/the-design-interview
Coaching teams in creative problem solving (Flowa Oy)
Agile has helped teams collaborate and organize work better. That's great: better teamwork and a better understanding of the work definitely help a team do the right things. Agile has also led the way toward technical practices such as Continuous Integration and Delivery, Test-Driven Development and SOLID architecture principles. Great: these things definitely help the team do things right.
Then again, most of the time in software projects goes into problem solving and similar creative acts. Agile has relatively little to offer in these areas. Currently, agile is not about creativity, nor is it about problem solving.
This coaching circle session will focus on the creative core of software development: solving novel, original and broad problems more effectively, all the time. I will introduce some principles and tools I've found useful when helping people solve hard problems and find creative solutions.
DoWhy: An end-to-end library for causal inference (Amit Sharma)
In addition to efficient statistical estimators of a treatment's effect, successful application of causal inference requires specifying assumptions about the mechanisms underlying observed data and testing whether they are valid, and to what extent. However, most libraries for causal inference focus only on the task of providing powerful statistical estimators. We describe DoWhy, an open-source Python library that is built with causal assumptions as its first-class citizens, based on the formal framework of causal graphs to specify and test causal assumptions. DoWhy presents an API for the four steps common to any causal analysis: 1) modeling the data using a causal graph and structural assumptions, 2) identifying whether the desired effect is estimable under the causal model, 3) estimating the effect using statistical estimators, and finally 4) refuting the obtained estimate through robustness checks and sensitivity analyses. In particular, DoWhy implements a number of robustness checks including placebo tests, bootstrap tests, and tests for unobserved confounding. DoWhy is an extensible library that supports interoperability with other implementations, such as EconML and CausalML, for the estimation step.
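DoWhy exposes these four steps as an API; as a dependency-free sketch of the same model, identify, estimate, refute logic (this is hand-rolled backdoor adjustment on synthetic data, not DoWhy's actual API), consider:

```python
import random
random.seed(0)

# Synthetic data: confounder z drives both treatment t and outcome y;
# the true effect of t on y is +2.
rows = []
for _ in range(5000):
    z = random.random() < 0.5
    t = random.random() < (0.8 if z else 0.2)
    y = 2.0 * t + 3.0 * z + random.gauss(0, 0.1)
    rows.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

# Steps 1-2 (model + identify): by assumption, {z} blocks every backdoor
# path, so the z-adjusted treated/control contrast identifies the effect.
def adjusted_effect(data):
    effect = 0.0
    for zval in (False, True):
        stratum = [(t, y) for z, t, y in data if z == zval]
        y1 = mean([y for t, y in stratum if t])
        y0 = mean([y for t, y in stratum if not t])
        effect += (y1 - y0) * len(stratum) / len(data)
    return effect

# Step 3 (estimate): should land near the true effect of 2.
ate = adjusted_effect(rows)

# Step 4 (refute): a randomly assigned placebo treatment should show ~0.
placebo = [(z, random.random() < 0.5, y) for z, t, y in rows]
placebo_ate = adjusted_effect(placebo)
```

The placebo check is the simplest of the refutations listed in the abstract: if a fake treatment "finds" an effect, the original estimate is suspect.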
Chris Soderquist presentation at the 2016 Science of HOPE
Description:
This session will introduce participants to a powerful approach to orchestrating useful learning across difficult boundaries using system dynamics. Through real world examples and interactive exercises, participants will learn how system dynamics can help them gain far more useful leverage when addressing complex, adaptive challenges. Participants will also see how this approach was used in a project funded by the Foundation for Healthy Generations to guide strategic decisions in Washington (and other states) for building community capacity and resilience.
Slides from a talk given at the SF Data Mining meetup event on 4/6/2017, in Oakland.
There are many ways to segment users - marketers typically want to define personas based on interviews with a few potential users, while UX researchers try to segment users based on intentions. By contrast, data analysts can create segments based on behavioral data observed on all users, without trying to impute a user's intention or persona. Of course all three approaches are complementary and needed for different purposes. The quantitative approach can help inform opportunities and offers a framework for tracking user growth, user engagement, and funnel conversion. Using VSCO's active audience as an example, we will take a deep dive on how to apply clustering methods to identify segments and measure their evolution over time, while avoiding idiosyncratic pitfalls.
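The clustering approach described above can be sketched with plain k-means on behavioural features; the feature names and numbers below are hypothetical, not VSCO data:

```python
import math

# Hypothetical behavioural features per user: (sessions/week, photos edited/week).
users = [(1, 2), (2, 1), (1.5, 1.5),      # a "casual" segment
         (9, 12), (10, 11), (11, 13)]     # a "power user" segment

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic initialisation (first k points)."""
    centers = [tuple(map(float, p)) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:  # recompute the centroid; keep the old center if emptied
                new_centers.append(tuple(sum(xs) / len(cl) for xs in zip(*cl)))
            else:
                new_centers.append(centers[i])
        centers = new_centers
    return centers, clusters

centers, clusters = kmeans(users, k=2)
```

Tracking how the centroids and cluster sizes move between time periods is one way to "measure their evolution over time" as the talk describes.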
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ... (Simplilearn)
This presentation on Machine Learning will help you understand why Machine Learning came into the picture, what Machine Learning is, the types of Machine Learning, and Machine Learning algorithms, with a detailed explanation of linear regression, decision trees and support vector machines. At the end you will also see a use-case implementation where we classify whether a recipe is for a cupcake or a muffin using the SVM algorithm. Machine learning is a core sub-area of artificial intelligence; it enables computers to get into a mode of self-learning without being explicitly programmed. When exposed to new data, these computer programs learn, grow, change, and develop by themselves. To put it simply, the iterative aspect of machine learning is the ability to adapt to new data independently. Now, let us get started with this Machine Learning presentation and understand what it is and why it matters.
Below topics are explained in this Machine Learning presentation:
1. Why Machine Learning?
2. What is Machine Learning?
3. Types of Machine Learning
4. Machine Learning Algorithms
- Linear Regression
- Decision Trees
- Support Vector Machine
5. Use case: Classify whether a recipe is of a cupcake or a muffin using SVM
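A minimal stand-in for the SVM use case in item 5: a simple perceptron (a linear classifier, like a linear SVM but without the max-margin objective) trained on invented ingredient proportions. The feature values are made up for illustration:

```python
# Feature vectors: (butter fraction, sugar fraction); labels +1 cupcake, -1 muffin.
data = [((0.30, 0.35), +1), ((0.28, 0.40), +1), ((0.33, 0.30), +1),
        ((0.10, 0.15), -1), ((0.12, 0.10), -1), ((0.08, 0.18), -1)]

w, b = [0.0, 0.0], 0.0
for _ in range(100):  # perceptron updates: nudge w toward misclassified points
    for (x1, x2), label in data:
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:
            w[0] += label * x1
            w[1] += label * x2
            b += label

def predict(x1, x2):
    return +1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
```

For linearly separable data like this, the perceptron is guaranteed to converge; an SVM would additionally maximise the margin between the two recipe classes.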
About Simplilearn Machine Learning course:
A form of artificial intelligence, Machine Learning is revolutionizing the world of computing as well as all people’s digital interactions. Machine Learning powers such innovative automated technologies as recommendation engines, facial recognition, fraud protection and even self-driving cars. This Machine Learning course prepares engineers, data scientists and other professionals with knowledge and hands-on skills required for certification and job competency in Machine Learning.
Why learn Machine Learning?
Machine Learning is taking over the world, and with that, there is a growing need among companies for professionals who know the ins and outs of Machine Learning.
The Machine Learning market size is expected to grow from USD 1.03 Billion in 2016 to USD 8.81 Billion by 2022, at a Compound Annual Growth Rate (CAGR) of 44.1% during the forecast period.
We recommend this Machine Learning training course for the following professionals in particular:
1. Developers aspiring to be a data scientist or Machine Learning engineer
2. Information architects who want to gain expertise in Machine Learning algorithms
3. Analytics professionals who want to work in Machine Learning or artificial intelligence
4. Graduates looking to build a career in data science and Machine Learning
Learn more at: https://www.simplilearn.com/
Opendatabay - Open Data Marketplace.pptx (Opendatabay)
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
The first open hub for data enthusiasts to collaborate and innovate: a platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, Opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. It leverages cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay also breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits, Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay: the marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Adjusting primitives for graph: SHORT REPORT / NOTES (Subhajit Sahu)
Graph algorithms, like PageRank, operate on graph representations such as Compressed Sparse Row (CSR), an adjacency-list based graph representation.
Multiply with different modes (map)
1. Performance of sequential vs OpenMP-based vector multiply.
2. Comparing various launch configs for CUDA-based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential vs OpenMP-based vector element sum.
2. Performance of memcpy-based vs in-place CUDA vector element sum.
3. Comparing various launch configs for CUDA-based vector element sum (memcpy).
4. Comparing various launch configs for CUDA-based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA-based vector element sum (in-place).
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... (John Andrews)
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
5. How may it apply to you?
• Use your own data!
My burning questions:
• What makes me happy?
• Am I getting better at running?
• What book should I read next?
8. So, what makes me happy?
• I tracked my mood every day in 2017…
• To see the effect of activities on mood
9. Dataset
• Outcome is mood as a binary variable
• Binary or categorical predictor variables
• Looking at the effect of predictors on outcome in a logistic regression model
library(readr)  # provides read_csv
daylio <- read_csv("daylio_2017.csv")
10. Descriptive statistics
• Quite optimistic in general
• Haven't quite defined an 'awful' day…
• Possibly an upturn at the weekends?
14. But am I getting better at running?
• Maximum Aerobic Function (MAF) method
• In a nutshell: train at your maximum aerobic heart rate in order to build aerobic base fitness and thus get faster at the same heart rate
• Multilevel regression of the effect of heart rate, slope and distance on pace, for multiple runs over time
18. Now, please recommend me a book
• Accessed reviews of the books I had on my list, using the Goodreads API
• Created a dataset of ratings for multiple books from multiple users (including me)
• Built a recommender system to recommend me a top 5
19. Goodreads API…
• Create booklist from my read (rated) and shelved (would like a recommendation on) books
books <- read_csv("goodreads_library_export.csv")
library(rgoodreads)
Sys.setenv(GOODREADS_KEY = "7U8VDuR3phc4vD1WQF1g")
# id and rrating are pre-allocated n-by-m matrices of book ids and ratings
for (i in 1:n) {
  for (j in 1:m) {
    ri <- j + (i - 1) * m
    tryCatch({
      c1 <- review(ri)
      c2 <- book(gsub(".*:", "", c1$book))
      c2$id <- as.numeric(c2$id)
      if (c2$id %in% booklist$id) {
        id[i, j] <- c2$id
        rrating[i, j] <- c1$rating
        rbooks <- data.frame(rrating[, j], id[, j])
      }
    }, error = function(e) { cat("ERROR:", conditionMessage(e), "\n") })
  }
}
• Used the rgoodreads package to request reviews from the API, then save them if they are on the booklist
20. You can tell what books are popular already…
Who doesn’t rate The Catcher in the Rye 5/5???
…actually Liz didn’t