These slides accompanied a talk I gave on January 27, 2016, at Startup Edmonton's "Lunchalytics" speaker series. The event was held in the Mercer Warehouse, 10359 104 Street Northwest, Edmonton, Alberta, Canada.
ICOS: Integrated Carbon Observation System. Open data to open our eyes to clim... (BlueBRIDGE)
Presentation by Harry Lankreijer, ICOS-Carbon Portal, Lund University, Sweden.
ICOS is a pan-European research infrastructure (RI) for observing and understanding the greenhouse gas (GHG) balance of Europe and its adjacent regions. The major task of ICOS is to collect and make available, in a transparent manner, the high-quality observational data from its state-of-the-art measurement stations. These ICOS data – from atmosphere, ecosystem and ocean stations – will contribute to research aiming to describe and understand the present state of the global carbon cycle. The Carbon Portal will be the virtual data center that presents the data products and makes them available. This presentation will briefly present the work of ICOS and the Carbon Portal towards open data following the FAIR principles. ICOS has an open data policy with free use, requesting the user to give appropriate credit (Creative Commons Attribution 4.0). The Carbon Portal is developing a data catalogue using an ontology based on a semantic metadata description. This will make it possible to integrate ICOS observations with data from other RIs as well as with data from global networks. For integration, the Carbon Portal is actively following the development of international standards for, e.g., metadata and data citation.
A short introduction to the world of open data and the opportunities it creates. The slides are from my presentation at the GoOpen 2009 conference in Oslo, Norway.
Presentation by Daniele Bailo, INGV, Italy
EPOS has been designed with the vision of creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. In accordance with this scientific vision, the EPOS mission is to integrate the diverse and advanced European Research Infrastructures for solid Earth science relying on new e-science opportunities to monitor and unravel the dynamic and complex Earth System. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth’s physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami as well as the processes driving tectonics and Earth’s surface dynamics. To accomplish its mission, EPOS is engaging different stakeholders, not limited to scientists, to allow the Earth sciences to open new horizons in our understanding of the planet. Through integration of data, models and facilities, EPOS will allow the Earth science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources as well as Earth sciences applications to the environment and human welfare.
The development of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms to provide access to predictive models, we have attempted to discriminate the influence of the quality versus quantity of data available to develop and validate QSAR models. We have focused our efforts on the widely used EPISuite software that was initially developed over two decades ago and, specifically, on the PHYSPROP dataset used to train the EPISuite prediction models. This presentation will review our approaches to examining key datasets, the delivery of curated data, and the development of machine-learning models for thirteen separate property endpoints of interest to environmental science. We will also review how these data will be made freely accessible to the community via a new “chemistry dashboard”. This abstract does not reflect U.S. EPA policy.
Presenter(s): Ruth Baker, Jeffrey Mortimore.
Presenters offered simple strategies for content creation and management that maximize opportunities for repurposing content across delivery platforms while keeping maintenance to a minimum. Strategies for content creation include naming, description, and chunking that support repurposing of content for multiple audiences and support contexts. The presentation also covers strategies for dynamically mapping content across platforms that eliminate any need to monitor platform content separately. While this session focused on LibGuides and LibAnswers, the content creation techniques discussed are applicable to any content management system that supports dynamic content mapping and/or external widget creation.
Presenter(s): Jeffrey Mortimore.
As federal funding requirements continue to evolve and more publishers are requiring open data sharing as a condition of publication, academic libraries have an important role to play supporting campus researchers’ data management needs. This session explores in detail the National Science Foundation’s current data management requirements, giving special attention to data planning as part of the NSF’s grant application process.
On Friday, September 16th, I was honored with the North Carolina American Chemical Society Distinguished Speaker Award and got to review the past 20 years of my career. This was my short intro bio:
"Antony Williams is a Ph.D. NMR spectroscopist and cheminformatician who has worked in academia, government, a Fortune 500 company, and two start-ups. He is co-founder of the free online chemical database ChemSpider, originally started as a hobby project and ultimately acquired by the Royal Society of Chemistry (in the UK) and now used by over 50,000 users per day. He is now a computational chemist at the Environmental Protection Agency in the National Center for Computational Toxicology and is focused on developing web applications to support data dissemination and progress efforts in allowing for faster and cheaper approaches to identify potential toxicological effects of chemicals. He has published >180 papers, >25 book chapters and a number of books. He is known as the ChemConnector on social networks."
This presentation was a webinar update to the Open PHACTS community regarding the release of the Open PHACTS open source components of the Chemical Registration System and, more specifically, the Chemical Validation and Standardization Platform. The need for a community set of rules was driven home, with the Chemical Validation and Standardization Platform potentially serving as an example platform for the rules.
Using Ecological Momentary Assessment to Examine Post-food Consumption Affect... (Yue Liao)
We used a smartphone app to prompt brief electronic surveys assessing a sample of mothers' eating behaviors and feelings at random moments throughout their daily lives.
This presentation highlights known challenges with the production of high-quality chemical databases and outlines recent efforts made to address these challenges. Specific examples will be provided illustrating these challenges within the U.S. Environmental Protection Agency (EPA) Computational Toxicology Program. This includes consolidating EPA’s ACToR and DSSTox databases, augmenting computed properties and list search features, and introducing quality metrics to assess confidence in chemical structure assignments across hundreds of thousands of chemical substance records. The past decade has seen enormous investments in the generation and release of data from studies of chemicals and their toxicological effects. There is, however, commonly little concern given to provenance and, more generally, to the quality of the data. The presentation will emphasize the importance of rigorous data review procedures, progress in web-based public access to accurate chemical data sets for use in predictive modeling, and the benefits that these efforts will deliver to toxicologists as they embrace the “Big Data” era.
This abstract does not necessarily represent the views of the U.S. Environmental Protection Agency.
Researchers at the EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The intention of this research program is to quickly evaluate thousands of chemicals for potential risk but with much reduced cost relative to historical approaches. This work involves computational and data driven approaches including high-throughput screening, modeling, text-mining and the integration of chemistry, exposure and biological data. We have developed a number of databases and applications that are delivering on the vision of developing a deeper understanding of chemicals and their effects on exposure and biological processes that are supporting a large community of scientists in their research efforts. This presentation will provide an overview of our work to bring together diverse large scale data from the chemical and biological domains, our approaches to integrate and disseminate these data, and the delivery of models supporting computational toxicology. This abstract does not reflect U.S. EPA policy.
SMS Berlin 2016: Cultural Perspectives on Strategic Management (Joel Gehman)
Strategic Management Society 2016 Conference
Berlin, Germany
Sunday, September 18
Session 253 - Cultural Perspectives on Strategic Management
Track J
Session Chairs
Joel Gehman, University of Alberta
Krsto Pandza, University of Leeds
Session Panelists
Shahzad Ansari, University of Cambridge
Rodolphe Durand, HEC-Paris
Candace Jones, University of Edinburgh Business School
Michael Lounsbury, University of Alberta
Richard Whittington, University of Oxford
This session aims to spark conversations between scholars at the intersection of strategic management and organization theory. In particular, we hope the event will generate awareness of, stimulate interest in, and set direction for research at the SM-OT interface. Specifically, the panelists will address potential connections between perennial strategy topics (such as resources, capabilities, innovation, competition, governance, nonmarket strategy, and strategy process and practice) and topics of central interest to organization theory (such as institutional logics, organizational forms, legitimacy, creativity, framing, and categories). Panelists will identify the most promising questions that could benefit from integrating strategy and organization theory concepts, as well as discuss possible challenges of such a theoretical bricolage.
The construction of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms to provide access to predictive models, we have attempted to discriminate the influence of the quality versus quantity of data available to develop and validate QSAR models. We have focused our efforts on the widely used EPISuite software that was initially developed over two decades ago. Specific examples of quality issues for the EPISuite data include multiple records for the same chemical structure with different measured property values, inconsistency between the structure, chemical name and CAS registry number for single records, the inability to convert the SMILES strings into chemical structures, hypervalency in the chemical structures, and the absence of stereochemistry for thousands of data records. Relative to the era of EPISuite development, modern cheminformatics tools allow for more advanced capabilities in terms of chemical structure representation and storage, as well as enabling automated data validation and standardization approaches to examine data quality. This presentation will review both our manual and automated approaches to examining key datasets related to the EPISuite training and test data. This includes approaches to validate chemical structure representations (e.g. molfile and SMILES) against identifiers (chemical names and registry numbers), as well as approaches to standardize the data into QSAR-consumable formats for modeling. We have quantified and segregated the data into various quality categories to allow us to thoroughly investigate the resulting models that can be developed from these data slices and to examine to what extent efforts into the development of large high-quality datasets have the expected pay-off in terms of prediction performance. This abstract does not reflect U.S. EPA policy.
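One of the identifier checks mentioned above, validating registry numbers, can be illustrated with the CAS Registry Number check digit: the final digit must equal a weighted sum of the preceding digits modulo 10. This is a minimal sketch of that one check, not the EPA workflow; the function name and input format are ours.

```python
import re

def cas_checksum_ok(cas: str) -> bool:
    """Validate a CAS Registry Number string (e.g. '7732-18-5').

    The check digit equals the sum of the preceding digits, each
    multiplied by its position counted from the right (1, 2, 3, ...),
    taken modulo 10.
    """
    m = re.fullmatch(r"(\d{2,7})-(\d{2})-(\d)", cas)
    if not m:
        return False  # not even syntactically a CAS number
    digits = (m.group(1) + m.group(2))[::-1]  # digits before the check digit, reversed
    total = sum((i + 1) * int(d) for i, d in enumerate(digits))
    return total % 10 == int(m.group(3))

print(cas_checksum_ok("7732-18-5"))  # water: True
print(cas_checksum_ok("7723-18-5"))  # transposed digits: False
```

A passing checksum only rules out transcription errors such as transposed digits; it cannot confirm that a syntactically valid number actually belongs to the chemical named in the record, which is why cross-validation against names and structures is still needed.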
There is a growing need for rapid chemical screening and prioritization to inform regulatory decision-making on thousands of chemicals in the environment. We have previously used high-resolution mass spectrometry to examine household vacuum dust samples using liquid chromatography time-of-flight mass spectrometry (LC-TOF/MS). Using a combination of exact mass, isotope distribution, and isotope spacing, molecular features were matched with a list of chemical formulas from the EPA’s Distributed Structure-Searchable Toxicity (DSSTox) database. This has further developed our understanding of how openly available chemical databases, together with the appropriate searches, could be used for the purpose of compound identification. We report here on the utility of the EPA’s iCSS Chemistry Dashboard for the purpose of compound identification using searches against a database of over 720,000 chemicals. We also examine the benefits of QSAR prediction for the purpose of retention time prediction to allow for alignment of both chromatographic and mass spectral properties. This abstract does not reflect U.S. EPA policy.
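The formula-matching step described above, comparing an observed exact mass against candidate formulas within a parts-per-million tolerance, can be sketched as follows. This is a simplified illustration only (a handful of elements, no isotope-distribution or isotope-spacing scoring); the monoisotopic masses are standard values, while the function names and the 10 ppm tolerance are our assumptions, not details of the published workflow.

```python
import re

# Monoisotopic masses (u) of a few common elements.
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def formula_mass(formula: str) -> float:
    """Monoisotopic mass of a simple molecular formula, e.g. 'H2O' or 'CO2'."""
    mass = 0.0
    for elem, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            mass += MONO[elem] * (int(count) if count else 1)
    return mass

def match_formulas(observed, candidates, ppm_tol=10.0):
    """Return (formula, ppm error) pairs within ppm_tol of the observed mass,
    sorted by increasing error."""
    hits = []
    for f in candidates:
        calc = formula_mass(f)
        ppm = abs(observed - calc) / calc * 1e6
        if ppm <= ppm_tol:
            hits.append((f, round(ppm, 1)))
    return sorted(hits, key=lambda h: h[1])

print(match_formulas(18.0106, ["H2O", "NH3", "CH4", "CO2"]))  # → [('H2O', 2.0)]
```

In practice a real feature list would be screened against tens of thousands of formulas, and surviving hits would then be ranked using the additional evidence the abstract mentions (isotope patterns and predicted retention times).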
The iCSS Chemistry Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and ExpoCast). The available data and searches provide a valuable path to structure identification using mass spectrometry as the source data. With an underlying database of over 720,000 chemicals, the dashboard has already been used to assist in identifying chemicals present in house dust. However, it can also be applied to many other purposes, e.g., the identification of agrochemicals in waste streams. This presentation will provide a review of the EPA’s platform and underlying algorithms used for the purpose of compound identification using high-resolution mass spectrometry data. We will also discuss progress towards a high-throughput non-targeted analysis platform for use by the mass spectrometry community. This abstract does not reflect U.S. EPA policy.
The iCSS CompTox Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and ExpoCast). The available data and searches provide a valuable path to structure identification using mass spectrometry as the source data. With an underlying database of over 720,000 chemicals, the dashboard has already been used to assist in identifying chemicals present in house dust. However, it can also be applied to many other purposes, e.g., the identification of agrochemicals in waste streams. This presentation will provide a review of the EPA’s platform and underlying algorithms used for the purpose of compound identification using high-resolution mass spectrometry data. In order to examine its performance for structure identification, especially in terms of rank-ordering database hits, we have compared it with the ChemSpider database, a well-regarded public database that has become one of the community standards for structure identification. The study has shown that the CompTox Dashboard outperforms ChemSpider in terms of structure identification and ranking, providing improved outcomes for mass spectrometry analysis of “known unknowns”.
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. We have delivered public access to terabytes of open data, as well as to a large number of publicly accessible databases and applications, to support the research efforts of a large community of scientists. Many of our contributions to science are described in research papers, but to date we have not optimized our contributions to inform the altmetrics statistics associated with our work. Critically missing from altmetrics is access to our numerous software applications and web service accesses, as well as the growing importance of our experimental data and models (e.g. ToxCast, ExpoCast, DSSTox and others) to the scientific and regulatory communities. This presentation will provide an overview of our efforts to more fully understand, and quantify, our impact on the environmental sciences using a combination of our own measurement approaches and available altmetrics tools. This abstract does not reflect U.S. EPA policy.
Researchers at EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The goal of this research program is to quickly evaluate thousands of chemicals, but at a much reduced cost and shorter time frame relative to traditional approaches. The data generated by the Center includes characterization of thousands of chemicals across hundreds of high-throughput screening assays, consumer use and production information, pharmacokinetic properties, literature data, physical-chemical properties as well as the predictive computational modeling of toxicity and exposure. We have developed a number of databases and applications to deliver the data to the public, academic community, industry stakeholders, and regulators. This presentation will provide an overview of our work to develop an architecture that integrates diverse large-scale data from the chemical and biological domains, our approaches to disseminate these data, and the delivery of models supporting predictive computational toxicology. In particular, this presentation will review our new publicly-accessible CompTox Dashboard as the first application built on our newly developed architecture. This abstract does not reflect U.S. EPA policy.
The iCSS CompTox Chemistry Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and ExpoCast). The available data and searches provide a valuable path to structure identification using mass spectrometry as the source data. With an underlying database of over 720,000 chemicals, the dashboard has already been used to assist in identifying chemicals present in house dust. This poster reviews the benefits of the EPA’s platform and underlying algorithms used for the purpose of compound identification using high-resolution mass spectrometry data. Standard approaches for both mass and formula lookup are available, but the dashboard delivers a novel approach for hit ranking based on the functional use of the chemicals. The focus on high-quality data, novel ranking approaches, and integration with other resources of value to mass spectrometrists makes the CompTox Dashboard a valuable resource for the identification of environmental chemicals. This abstract does not reflect U.S. EPA policy.
As part of our efforts to develop a public platform to provide access to predictive models, we have attempted to disentangle the influence of the quality versus quantity of data available to develop and validate QSAR models. Using a thorough manual review of the data underlying the well-known EPI Suite software, we developed automated processes for the validation of the data using a KNIME workflow. This includes approaches to validate different chemical structure representations (e.g. molfile and SMILES) and identifiers (chemical names and registry numbers), and methods to standardize the data into QSAR-consumable formats for modeling. Our efforts to quantify and segregate data into various quality categories have allowed us to thoroughly investigate the resulting models developed from these data slices, as well as to examine whether or not efforts into the development of large high-quality datasets have the expected pay-off in terms of prediction performance. Machine-learning approaches have been applied to create a series of models that have been used to generate predicted physicochemical and environmental parameters for over 700,000 chemicals. These data are available online via the EPA’s iCSS Chemistry Dashboard. This abstract does not reflect U.S. EPA policy.
Despite the availability of many platforms for scientists to connect and share with their peers in the scientific community, the majority do not make use of these tools, notwithstanding their promise and potential influence on our careers. We are already being indexed and exposed on the internet via our publications, presentations and data, and new “AltMetric scores” are being assigned to scientific publications as measures of popularity and, supposedly, of impact. We now have even more ways to contribute to science, to annotate and curate data, and to “publish” in new ways, and many of these activities are part of a growing crowdsourcing network. This presentation provides an overview of the various types of networking and collaborative sites available to scientists and ways to expose your scientific activities online. It will discuss the new world of AltMetrics, which is on an explosive growth curve, and will help you understand how to influence and leverage some of these new measures. Whether you participate online simply for career advancement or for wider exposure of your research, a series of web applications now provide a great opportunity to develop a scientific profile within the community.
Presenter(s): Jeffrey Mortimore, Jessica Garner, Jermaine Bryant, Jessica Williams.
Interlibrary Loan (ILL) requests reveal a lot about our collections, from development needs to access issues. This session focuses on how ILL and Technical Services personnel at Georgia Southern University are using ILL request information to troubleshoot and improve electronic resource access across our collections.
Shaping Expectations: Defining and Refining the Role of Technical Services in... (NASIG)
From trial to implementation, technical services staff play an important role in shaping awareness of, and expectations for, new resources. Internally, technical services staff provide information and instruction to public services staff. Externally, they influence how new resources are integrated into the library website and other platforms. With appropriate “message control,” technical services staff can positively influence awareness of new resources while keeping everyone’s expectations in check.
During fall 2015, technical services staff at Georgia Southern University adopted a protocol for new resource rollouts that explicitly times and structures internal and external communications to ensure that all library staff are ready to support new resources as they go live. This protocol focuses on providing appropriate lead-time notifications to public services staff and “training the trainers” first, prior to releasing any external communications. Furthermore, this protocol integrates with the activities of the library’s promotion committee, supporting a smooth transition to public services promotion of new resources.
During this session, presenters will discuss this protocol in detail, with special emphasis on timing of internal and external communications, the importance of providing sufficient staff training and support materials early on, and the importance of maintaining objectivity and accuracy in all rollout communications and assets. Presenters will share protocol planning tools and worksheets, describe how these are integrated into implementation workflows, and engage participants in discussion about the role of technical services in new resource rollouts.
Presenters:
Jeff Mortimore & Debra Skinner
Zach S. Henderson Library
Georgia Southern University
Many of us nowadays invest significant amounts of time in sharing our activities and opinions with friends and family via social networking tools such as Facebook, Twitter and related websites. Yet despite the availability of many platforms for scientists to connect and share with their peers in the scientific community, the majority do not make use of these tools, for all their promise and potential influence on our careers. We are already being indexed and exposed on the internet via our publications, presentations and data, and new “AltMetric scores” are being assigned to scientific publications as measures of popularity and, supposedly, of impact. We now have even more ways to contribute to science, to annotate and curate data, and to “publish” in new ways, and many of these activities are part of a growing crowdsourcing network. This presentation provides an overview of the various types of networking and collaborative sites available to scientists and of ways to expose your scientific activities online. It will discuss the rapidly growing world of AltMetrics and help you understand how to influence and leverage some of these new measures. Whether you participate online simply for career advancement or for wider exposure of your research, a series of web applications now offers a great opportunity to develop a scientific profile within the community.
Web Preservation, or Managing your Organisation’s Online Presence After the Organisation Ceases to Exist (lisbk)
Slides for talk on "Web Preservation, or Managing your Organisation’s Online Presence After the Organisation Ceases to Exist" given by Brian Kelly, UK Web Focus at the IRMS 2016 conference in Brighton on 17 May 2016.
See http://ukwebfocus.com/events/irms-2016-web-preservation
Going Concerns: A Perspective from the Nexus of Business, Culture and Instit... (Joel Gehman)
These slides accompany a talk I gave on November 4, 2015, at the University of Alberta, in Edmonton, Canada, as part of the Office of the Vice President of Research Social Sciences and Humanities Research Council "Open Minds" event.
For a video of my talk, see: https://youtu.be/3lI5mUWCHZ8
Abstract
Climate change, water scarcity, urban poverty, hydraulic fracturing, social license. For organizations, especially businesses, the rapid emergence and escalation of such cultural concerns can pose significant strategic and technological challenges. For their part, governments also may be pressed to adapt and respond to cultural concerns through policy and regulation. There also can be interactive effects, with changes in one sphere cascading into another. I consider such concerns in the context of unconventional oil and gas development, and provide a brief overview of WellWiki.org. I conclude by highlighting the importance of such research for helping us to articulate potential trajectories for living.
Per Peterson, chair of nuclear engineering at UC Berkeley, presents on the United States' nuclear waste policy and gives recommendations on future steps.
The NuClean Kick-Off workshop was held on Nov. 7, 2013 at the Handlery Union Square Hotel in San Francisco, CA, co-located with the AIChE 2013 Annual Meeting.
For more information on NuClean, visit: http://www.aiche.org/cei/conferences/nuclean-workshop/2013.
For more information on AIChE's Center for Energy Initiatives (CEI), visit: http://www.aiche.org/cei.
Regulatory Challenges to Alternative Energy (Kevin Haroff)
Presentation given on March 30, 2010, at Cornell Law School in Ithaca, New York. Titled “Environmental and Regulatory Challenges to Developing Energy Alternatives – a Case Study,” the presentation focused on difficulties companies face when seeking regulatory approvals for proposed solar thermal energy projects in Southern California.
Using Ecological Momentary Assessment to Examine Post-food Consumption Affect... (Yue Liao)
We used a smartphone app to prompt brief electronic surveys assessing a sample of mothers' eating behaviors and feelings at random moments throughout their daily lives.
This presentation highlights known challenges with the production of high-quality chemical databases and outlines recent efforts to address these challenges. Specific examples will be provided illustrating these challenges within the U.S. Environmental Protection Agency (EPA) Computational Toxicology Program. These include consolidating EPA’s ACToR and DSSTox databases, augmenting computed properties and list-search features, and introducing quality metrics to assess confidence in chemical structure assignments across hundreds of thousands of chemical substance records. The past decade has seen enormous investments in the generation and release of data from studies of chemicals and their toxicological effects. Too often, however, little attention is given to provenance and, more generally, to the quality of the data. The presentation will emphasize the importance of rigorous data review procedures, progress in web-based public access to accurate chemical data sets for use in predictive modeling, and the benefits that these efforts will deliver to toxicologists embracing the “Big Data” era.
This abstract does not necessarily represent the views of the U.S. Environmental Protection Agency.
Researchers at the EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The intention of this research program is to quickly evaluate thousands of chemicals for potential risk, but at much reduced cost relative to historical approaches. This work involves computational and data-driven approaches including high-throughput screening, modeling, text-mining and the integration of chemistry, exposure and biological data. We have developed a number of databases and applications that deliver on this vision, deepening our understanding of chemicals and their effects on exposure and biological processes while supporting a large community of scientists in their research efforts. This presentation will provide an overview of our work to bring together diverse large-scale data from the chemical and biological domains, our approaches to integrate and disseminate these data, and the delivery of models supporting computational toxicology. This abstract does not reflect U.S. EPA policy.
SMS Berlin 2016 Cultural Perspectives on Strategic Management (Joel Gehman)
Strategic Management Society 2016 Conference
Berlin, Germany
Sunday, September 18
Session 253 - Cultural Perspectives on Strategic Management
Track J
Session Chairs
Joel Gehman, University of Alberta
Krsto Pandza, University of Leeds
Session Panelists
Shahzad Ansari, University of Cambridge
Rodolphe Durand, HEC-Paris
Candace Jones, University of Edinburgh Business School
Michael Lounsbury, University of Alberta
Richard Whittington, University of Oxford
This session aims to spark conversations between scholars at the intersection of strategic management and organization theory. In particular, we hope the event will generate awareness of, stimulate interest in, and set direction for research at the SM-OT interface. Specifically, the panelists will address potential connections between perennial strategy topics, such as resources, capabilities, innovation, competition, governance, nonmarket strategy, and strategy process and practice, and topics of central interest to organization theory, such as institutional logics, organizational forms, legitimacy, creativity, framing and categories. Panelists will identify the most promising questions that could benefit from integrating strategy and organization theory concepts, and will discuss possible challenges of such a theoretical bricolage.
The construction of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms providing access to predictive models, we have attempted to disentangle the influence of the quality versus the quantity of data available to develop and validate QSAR models. We have focused our efforts on the widely used EPI Suite software, initially developed over two decades ago. Specific examples of quality issues in the EPI Suite data include multiple records for the same chemical structure with different measured property values; inconsistency between the structure, chemical name and CAS registry number within single records; SMILES strings that cannot be converted into chemical structures; hypervalency in the chemical structures; and the absence of stereochemistry for thousands of data records. Relative to the era of EPI Suite development, modern cheminformatics tools offer more advanced capabilities for chemical structure representation and storage, and enable automated data validation and standardization approaches for examining data quality. This presentation will review both our manual and automated approaches to examining key datasets related to the EPI Suite training and test data. This includes approaches to cross-check chemical structure representations (e.g., molfile and SMILES) against identifiers (chemical names and registry numbers), as well as approaches to standardize the data into QSAR-consumable formats for modeling. We have quantified and segregated the data into various quality categories to allow us to thoroughly investigate the resulting models that can be developed from these data slices, and to examine to what extent efforts into the development of large high-quality datasets have the expected pay-off in terms of prediction performance. This abstract does not reflect U.S. EPA policy.
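One inexpensive automated check suggested by the name/CAS inconsistencies described above is validating the check digit of each CAS Registry Number. The sketch below implements the standard public check-digit rule (each digit weighted by its position counting from the right, modulo 10); it is illustrative only and not the authors' actual validation workflow:

```python
import re

def cas_checksum_ok(cas: str) -> bool:
    """Validate a CAS Registry Number via its check digit.

    The check digit equals the sum of each digit multiplied by its
    position (counting from the right, excluding the check digit),
    taken modulo 10.
    """
    m = re.fullmatch(r"(\d{2,7})-(\d{2})-(\d)", cas)
    if not m:
        return False  # not even structurally a CAS number
    digits = m.group(1) + m.group(2)
    check = int(m.group(3))
    total = sum(int(d) * i for i, d in enumerate(reversed(digits), start=1))
    return total % 10 == check

# 7732-18-5 (water) passes; 7732-18-4 fails the checksum.
```

A record whose CAS number fails this test cannot be trusted, so the check is a cheap first-pass filter before any structure-versus-identifier comparison.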
There is a growing need for rapid chemical screening and prioritization to inform regulatory decision-making on thousands of chemicals in the environment. We have previously used high-resolution mass spectrometry to examine household vacuum dust samples using liquid chromatography time-of-flight mass spectrometry (LC-TOF/MS). Using a combination of exact mass, isotope distribution, and isotope spacing, molecular features were matched with a list of chemical formulas from the EPA’s Distributed Structure-Searchable Toxicity (DSSTox) database. This has further developed our understanding of how openly available chemical databases, together with appropriate searches, can be used for compound identification. We report here on the utility of the EPA’s iCSS Chemistry Dashboard for compound identification using searches against a database of over 720,000 chemicals. We also examine the benefits of QSAR-based retention time prediction to allow alignment of both chromatographic and mass spectral properties. This abstract does not reflect U.S. EPA policy.
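The exact-mass matching step described above can be sketched as follows. This is a deliberately simplified illustration: it uses standard monoisotopic masses for a handful of elements, ignores the isotope-distribution and isotope-spacing criteria, and the function names are hypothetical rather than taken from the dashboard:

```python
import re

# Monoisotopic masses (u) for a small illustrative subset of elements.
MONOISOTOPIC = {
    "C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
    "O": 15.9949146221, "S": 31.97207069, "Cl": 34.96885271,
}

def formula_mass(formula: str) -> float:
    """Neutral monoisotopic mass of a simple molecular formula, e.g. 'C8H10N4O2'."""
    mass = 0.0
    for elem, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:  # skip the regex's trailing empty match
            mass += MONOISOTOPIC[elem] * (int(count) if count else 1)
    return mass

def match_features(measured_mass, formulas, ppm_tol=5.0):
    """Return (formula, ppm error) candidates within ppm_tol of a measured feature."""
    hits = []
    for f in formulas:
        m = formula_mass(f)
        ppm = abs(m - measured_mass) / m * 1e6
        if ppm <= ppm_tol:
            hits.append((f, round(ppm, 2)))
    return sorted(hits, key=lambda h: h[1])  # best (lowest ppm) first
```

For example, a neutral feature mass of 194.0804 retrieves C8H10N4O2 (caffeine) well inside a 5 ppm window, while C9H14N2O2 is rejected.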
The iCSS Chemistry Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and ExpoCast). The available data and searches provide a valuable path to structure identification using mass spectrometry as the source data. With an underlying database of over 720,000 chemicals, the dashboard has already been used to assist in identifying chemicals present in house dust. However, it can also be applied to many other purposes, e.g., the identification of agrochemicals in waste streams. This presentation will provide a review of the EPA’s platform and the underlying algorithms used for compound identification using high-resolution mass spectrometry data. We will also discuss progress towards a high-throughput non-targeted analysis platform for use by the mass spectrometry community. This abstract does not reflect U.S. EPA policy.
The iCSS CompTox Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and ExpoCast). The available data and searches provide a valuable path to structure identification using mass spectrometry as the source data. With an underlying database of over 720,000 chemicals, the dashboard has already been used to assist in identifying chemicals present in house dust. However, it can also be applied to many other purposes, e.g., the identification of agrochemicals in waste streams. This presentation will provide a review of the EPA’s platform and the underlying algorithms used for compound identification using high-resolution mass spectrometry data. In order to examine its performance for structure identification, especially in terms of rank-ordering database hits, we have compared it with the ChemSpider database, a well-regarded public database that has become one of the community standards for structure identification. The study has shown that the CompTox Dashboard outperforms ChemSpider in terms of structure identification and ranking, providing improved outcomes for mass spectrometry analysis of “known unknowns”.
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data-driven approaches that integrate chemistry, exposure and biological data. We have delivered public access to terabytes of open data, as well as to a large number of publicly accessible databases and applications, to support the research efforts of a large community of scientists. Many of our contributions to science are summarized in research papers, but to date we have not optimized our contributions to inform the altmetrics statistics associated with our work. Critically missing from altmetrics is access to our numerous software applications and web services, as well as the growing importance of our experimental data and models (e.g., ToxCast, ExpoCast, DSSTox and others) to the scientific and regulatory communities. This presentation will provide an overview of our efforts to more fully understand, and quantify, our impact on the environmental sciences using a combination of our own measurement approaches and available altmetrics tools. This abstract does not reflect U.S. EPA policy.
Researchers at EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The goal of this research program is to quickly evaluate thousands of chemicals, but at a much reduced cost and shorter time frame relative to traditional approaches. The data generated by the Center includes characterization of thousands of chemicals across hundreds of high-throughput screening assays, consumer use and production information, pharmacokinetic properties, literature data, physical-chemical properties as well as the predictive computational modeling of toxicity and exposure. We have developed a number of databases and applications to deliver the data to the public, academic community, industry stakeholders, and regulators. This presentation will provide an overview of our work to develop an architecture that integrates diverse large-scale data from the chemical and biological domains, our approaches to disseminate these data, and the delivery of models supporting predictive computational toxicology. In particular, this presentation will review our new publicly-accessible CompTox Dashboard as the first application built on our newly developed architecture. This abstract does not reflect U.S. EPA policy.
The iCSS CompTox Chemistry Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and ExpoCast). The available data and searches provide a valuable path to structure identification using mass spectrometry as the source data. With an underlying database of over 720,000 chemicals, the dashboard has already been used to assist in identifying chemicals present in house dust. This poster reviews the benefits of the EPA’s platform and the underlying algorithms used for compound identification using high-resolution mass spectrometry data. Standard approaches for both mass and formula lookup are available, but the dashboard delivers a novel approach for hit ranking based on the functional use of the chemicals. The focus on high-quality data, novel ranking approaches and integration with other resources of value to mass spectrometrists makes the CompTox Dashboard a valuable resource for the identification of environmental chemicals. This abstract does not reflect U.S. EPA policy.
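The hit-ranking idea above can be illustrated with a minimal sketch: candidates sharing a molecular formula are ordered by how much supporting evidence each carries. The evidence fields (`sources`, `functional_uses`) are hypothetical placeholders for the kinds of metadata counts a dashboard might use; this is not the actual ranking algorithm:

```python
def rank_hits(candidates):
    """Rank candidate structures for one molecular formula.

    `candidates` maps a chemical name to a dict of evidence counts
    (hypothetical fields for illustration). Candidates with more data
    sources rank first, then more functional-use annotations; ties
    break alphabetically so the ordering is deterministic.
    """
    return sorted(
        candidates,
        key=lambda c: (-candidates[c].get("sources", 0),
                       -candidates[c].get("functional_uses", 0),
                       c),
    )

# A well-annotated chemical outranks an obscure isomer with the same formula.
```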
As part of our efforts to develop a public platform to provide access to predictive models, we have attempted to disentangle the influence of the quality versus quantity of data available to develop and validate QSAR models. Using a thorough manual review of the data underlying the well-known EPI Suite software, we developed automated processes for the validation of the data using a KNIME workflow. This includes: approaches to validate different chemical structure representations (e.g. molfile and SMILES), identifiers (chemical names and registry numbers), and methods to standardize the data into QSAR-consumable formats for modeling. Our efforts to quantify and segregate data into various quality categories have allowed us to thoroughly investigate the resulting models developed from these data slices, as well as to examine whether or not efforts into the development of large high-quality datasets have the expected pay-off in terms of prediction performance. Machine-learning approaches have been applied to create a series of models that have been used to generate predicted physicochemical and environmental parameters for over 700,000 chemicals. These data are available online via the EPA’s iCSS Chemistry Dashboard. This abstract does not reflect U.S. EPA policy.
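Part of standardizing such data into a QSAR-consumable form is collapsing replicate measurements for the same structure into a single value per chemical. A minimal sketch, assuming records are keyed by a structure identifier such as an InChIKey; taking the median is one conservative choice for illustration, not necessarily the authors' approach:

```python
from collections import defaultdict
from statistics import median

def consolidate(records):
    """Collapse replicate property measurements per structure.

    `records` is a list of (structure_key, value) pairs, where the same
    structure may carry several measured values; the median is kept as
    the representative value, which resists single-outlier records.
    """
    by_structure = defaultdict(list)
    for key, value in records:
        by_structure[key].append(value)
    return {key: median(vals) for key, vals in by_structure.items()}
```

Run over a property table, this yields exactly one training value per structure, which is the shape most QSAR modeling tools expect.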
Collins Balcombe from the US Bureau of Reclamation discusses the new WIIN Act for the 2020 Central Texas Water Conservation Symposium hosted by the Texas Living Waters Project.
Open Access in Archaeology. Opening the Past, 2013, Pisa (PDF) (ekansa)
Because the formatting is messed up from the Open Office file, here's the same presentation (http://www.slideshare.net/ekansa/pisa-open-accesskansafinal) in PDF format.
Open Access in Archaeology. Opening the Past, 2013, Pisa (ekansa)
My presentation on open access in archaeology, exploring the need for new forms of scholarly publication, dealing with information overload, the ethics of commodifying intellectual property in archaeology, and sustainability concerns.
Presentation on the American Archive of Public Broadcasting at the 2015 Society of American Archivists conference in Cleveland, Ohio. AAPB staff presented on the history of the project, website development, metadata, Online Reading Room, value to scholars and researchers, and digital preservation. Panelists included Karen Cariani, AAPB Director at WGBH, Casey Davis, AAPB Project Manager at WGBH, Alan Gevinson, AAPB Director at the Library of Congress, and James Snyder, Senior Systems Administrator at the Library of Congress.
University of Alberta Strategy Presentation (Joel Gehman)
Presentation made on May 25, 2020 by Bill Flanagan to the University of Alberta General Faculties Council outlining the University's strategy for the next 2 and 5 years.
Imaging, Characterizing, and Modeling Canada’s Geothermal Resources (Joel Gehman)
Canada's geoscape possesses more potential geothermal energy than hydrocarbon energy, but numerous challenges must be overcome if this renewable resource is to be effectively harnessed. Reservoirs of geothermal energy must be located, characterized, and modeled. The nature of the interaction between rock at reservoir sites and geothermal fluids must be understood, and the potential costs of exploiting them in real-world scenarios must be assessed. At the same time, new engine technologies must be developed to enable generation of power from geothermal heat sources with non-ideal temperatures.
DOI: 10.13140/RG.2.2.23127.98725
Positively Deviant: Identity Work Through Certification (Joel Gehman)
These slides accompanied a talk I gave at the 2017 Global B Corp Academic Community Roundtable on October 4, in Toronto, Ontario, Canada at the University of Toronto, Rotman School of Management. The research underlying the talk is accepted for publication in the Journal of Business Venturing.
B Academics: Status of Research on B Corps and Benefit Corporations (Joel Gehman)
Presented at the 2017 Global B Corp Academic Community Roundtable on October 4, in Toronto, Ontario, Canada at the University of Toronto, Rotman School of Management. The presentation provides a brief overview of published research on B Corps and/or Benefit Corporations.
New Public-Private Partnership Aims to Accelerate Sustainability-Focused Inno... (Joel Gehman)
Research collaboration with Canadian universities, Alberta Innovates, Natural Resources Canada and Canada’s Oil Sands Innovation Alliance will build on ‘made in Canada’ innovation model
University of Alberta Future Energy Systems Fall 2017 Postcard (Joel Gehman)
Our future energy needs will not be met by one source, but many. Hydrocarbons, wind, solar, biomass, geothermal, hydro, nuclear, and other technologies can all contribute to a complex system that meets our society’s increasing energy needs, while reducing our carbon footprint.
Future Energy Systems focuses on multidisciplinary research that develops the energy technologies of the near future, explores how these technologies can be integrated into our present-day infrastructure, and examines possible consequences for our society and the economy. It also contributes to the development of solutions for challenges presented by current energy systems, and considers the potential effects of new energy technologies.
Hidden Badge of Honor: How Contextual Distinctiveness Affects Category Promo... (Joel Gehman)
These slides accompanied a talk I gave at the 2016 Ivey Sustainability Conference on November 4, 2016, at the Ivey Business School in London, Ontario, Canada.
Sustainability in Management: Trends and Future Research Directions (Joel Gehman)
These slides accompanied a talk I gave as part of the closing plenary on "The Future of Research on Sustainability in Management" at the 2016 Ivey Sustainability Conference. Other panelists included Fabrizio Ferraro (IESE Business School) and Donal Crilly (London Business School). The session was moderated by Oana Branzei (Ivey Business School).
Donal, Fabrizio and I were each asked to submit a picture that captured our answers to the following five questions:
1. What do you think is the current position of sustainability in the management research landscape?
2. What do you believe are the current trends in research on sustainability?
3. Which areas/methods/topics do you think will be topical in the coming decade?
4. What would you personally like to see in 10 years?
5. Where do you see yourself as an academic in 15 years?
Innovation for Societal Impact: A Process Perspective - Joel Gehman
Northern Advanced Research Training Initiative (NARTI) Developmental Workshop
Innovation for Societal Impact: A Process Perspective
Professors: Raghu Garud, Joel Gehman, and Krsto Pandza
Thursday, 26 September 2013
Leeds University Business School, Leeds, UK
We invite advanced Ph.D. students and junior faculty to present their research at a developmental workshop on innovation processes.
Metatheoretical Perspectives on Sustainability Journeys: Evolutionary, Relati... - Joel Gehman
These slides accompanied a presentation I gave at the 2011 Academy of Management Annual Meeting in San Antonio, Texas, USA on Tuesday, August 16, 2011.
Theory-Method Packages: A Comparison of Three Qualitative Approaches to Theor... - Joel Gehman
These slides were used to introduce a Showcase Symposium at the 2016 Academy of Management Annual Meeting in Anaheim, California on "Theory-Method Packages: A Comparison of Three Qualitative Approaches to Theory Building." The session was organized by Joel Gehman and Vern L. Glaser. Speakers included Kathleen M. Eisenhardt (Stanford University), Dennis A. Gioia (Pennsylvania State University), Ann Langley (HEC Montréal), and Kevin G. Corley (Arizona State University).
For the companion video, see: https://youtu.be/_JdOSCzSpMc
For the companion article, see: http://bit.ly/2x1XZcy
Tackling the World's Biggest Problems With Robust Action - Joel Gehman
By Fabrizio Ferraro, Dror Etzion, and Joel Gehman
With some issues, the stakes are so high and the details so complex that tackling them requires finding ways for many people, companies and governments to work together.
This article summarizes our award-winning 2015 article -- "Tackling Grand Challenges Pragmatically: Robust Action Revisited" -- which was published in Organization Studies.
Tradeoffs in Sustainability-Oriented Innovations - Joel Gehman
By Tima Bansal, Hadi Chapardar, and Joel Gehman
Sustainability means balancing short- and long-term priorities.
Published in MIT Sloan Management Review, Feb 17, 2016. http://sloanreview.mit.edu/article/tradeoffs-in-sustainability-oriented-innovations/
Sharing a wealth of information: how regulators can improve fracking disclosu... - Joel Gehman
By Joel Gehman, Dror Etzion, and Miron Avidan
Both the information about fracking and the know-how to employ it effectively are available. Should they choose to do so, regulators can easily and cost-effectively adopt disclosure practices that will benefit Canadians and their communities.
This article appeared in The Hill Times, August 15, 2016. It summarizes research contained in: https://ssrn.com/abstract=2784468
What makes public disclosure effective? - Joel Gehman
Huffington Post
By Miron Avidan, Dror Etzion and Joel Gehman
Hydraulic fracturing (“fracking”) is a technology employed in the production of oil and gas from unconventional shale formations. Over the last decade, tens of thousands of fracking wells have been drilled worldwide. Fracking often takes place in relatively populated areas, thus posing an array of risks to public health such as water contamination and induced seismicity. In addition to inspecting and monitoring these risks, regulators now face the challenge of keeping the public well informed about their extent.
In order to shed light on how to address this challenge, we recently published a report on “The Effectiveness of Fracking Disclosure Regimes in Canada.”
2017 Global B Corp Academic Community Roundtable -- Call for Papers - Joel Gehman
On October 4, 2017, in parallel with the annual B Corp Champions Retreat in Toronto, Canada, we will host the Second Annual Global B Corporation Academic Community Roundtable. This year’s theme is “The Role of the Academic Community in Scaling the B Corp Movement.” This event is open to university faculty and students and other researchers conducting research on Certified B Corporations and/or Benefit Corporations (hereafter “B Corps”). We are especially interested in research that examines impact assessment methodologies and the impact of B Corps. Additionally, our preference is for empirical studies, whether qualitative or quantitative. In addition to presenting groundbreaking research on B Corps, the Roundtable will provide opportunities for researchers to discuss data sources and methods for studying B Corps, provide updates on their research in progress, and allow the academic community to develop a roadmap of current and future research opportunities.
Submission website: http://bit.ly/bsubmission
Registration website: http://bit.ly/broundtable
Call for Papers PDF: http://bit.ly/b2017call
Tackling Grand Challenges: Research Prospects at the Intersection of Robust ... - Joel Gehman
These slides accompanied a talk I gave at the 2017 Academy of Management Annual Meeting, August 7, 2017, in Atlanta, Georgia, USA as part of a symposium on "Addressing Grand Challenges with Institutional Research: The Critical Role of Power." The session was organized by Florian Ueberbacher (University of Zurich), Giuseppe Delmestri (WU Vienna University of Economics and Business), and Elizabeth Goodrick (Florida Atlantic University). Presenters included: Joel Gehman (University of Alberta), Johanna Mair (Hertie School of Governance), Kamal Munir (University of Cambridge), and Florian Ueberbacher (University of Zurich). Discussants were Royston Greenwood (University of Alberta) and Andreas Georg Scherer (University of Zurich). The Symposium was sponsored by the Organization and Management Theory Division and co-sponsored by the Social Issues in Management and Critical Management Studies Divisions.
Abstract:
This symposium focuses on how grand challenges can be addressed by taking an institutional theory perspective, with particular emphasis on the role of power. Taking an institutional theory lens to the study of grand challenges has the advantage of both improving our understanding of grand challenges and further developing theory. In this symposium, we will focus on how investigating different forms of power can make institutional theory particularly appropriate and relevant for the study of grand challenges. In particular, we will critically evaluate (1) what types and constellations of systemic forms of power underpin grand challenges, and (2) what types and constellations of episodic forms of power are necessary to tame and resolve them. The format we are proposing will create an environment of active debate among scholars from the fields of organization theory, business and society, and critical management studies.
Perspectives on Risk, AAPG-CSPG Conference Presentation - Joel Gehman
These slides accompanied a talk I gave at the AAPG-CSPG Oil Sands & Heavy Oil Symposium: A Local to Global Multidisciplinary Collaboration in Calgary, Alberta, Canada, October 14-16, 2014.
The slides complement the related working paper: "Perspectives on Risk: From Techno-Economic Calculations to Socio-Cultural Meanings," available at http://ssrn.com/abstract=2508488
Abstract:
Recent news reports have been filled with debates concerning energy and environment risks. For instance, are nuclear power plants safe, or should they be modified in light of the Fukushima Daiichi disaster? Is hydraulic fracturing a technological breakthrough for natural gas extraction, or a threat to humans and the environment? Is the proposed Keystone XL transnational pipeline an economic boon, or an environmental bane? More fundamentally, is climate change a problem, and if so, what should be done about it? Given questions such as these, risk has become a central concern for businesses, regulators and communities. At one level, risk is about calculations and numbers. When it first emerged in the seventeenth century, risk was related to gambling; risk meant the chances of an event occurring, and the magnitude of the losses or winnings it might bring. Such understandings are evident in technical perspectives on risk, including engineering, epidemiology, toxicology and probabilistic assessments, as well as in economic perspectives on risk, including finance, actuarial and insurance approaches. But at another level, risk is about narratives and meanings. We all live in a “risk society”; risk is a way of framing debates and giving meaning to what is at stake. Such understandings are evident in perceptual perspectives on risk, including psychological, behavioral, and public opinion studies, as well as in cultural perspectives, including sociology, political science and conflict resolution approaches. In this paper, we argue that understanding energy and environment risks requires bringing together both numbers and narratives. Only by going beyond techno-economic conceptualizations of risk and considering socio-cultural issues can we explain risk within and across societies, whether at a given point in time, or dynamically over time.
Water Initiative 2014 Canadian Water Network Research Project Overview - Joel Gehman
The Water Initiative research team is preparing an extensive comparative, multidisciplinary review of hydraulic fracturing wastewater handling, treatment, reuse and disposal by comparing the operating practices, regulatory policies and stakeholder concerns that have emerged in various jurisdictions regulating unconventional shale formations in North America. The research will identify key knowledge gaps and enable private and public sector decision-makers to develop specific research approaches to directly address these identified knowledge gaps.
Adjusting OpenMP PageRank : SHORT REPORT / NOTES - Subhajit Sahu
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take advantage of a shared-memory system with multiple CPUs, each with multiple cores, to accelerate PageRank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. As a step in this direction, experiments were conducted implementing PageRank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for PageRank in OpenMP mode (with multiple threads). The hybrid approach, on the other hand, runs certain primitives (namely sumAt and multiply) in sequential mode.
Analysis insight about a Flyball dog competition team's performance - roli9797
Insights from my analysis of a Flyball dog competition team's performance over the last year. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... - John Andrews
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working with unstructured data. Speakers present on related topics such as vector databases, LLMs, and managing data at scale. The intended audience includes machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup, and is sponsored by Zilliz, maintainers of Milvus.
Global Situational Awareness of A.I. and Where It's Headed - vikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... - Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables the calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It does, however, come with a precondition: the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by the submission of a large number of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.