This document describes a radiologist's experiment adding two monitors to reading workstations to improve workflow efficiency. The radiologist added two 30-inch, 4-megapixel monitors, a video card, and free software to existing workstations for under $1,000, converting a four-monitor system to six monitors while reducing overall department hardware and software costs. In a trial period, radiologists reported major improvements in ergonomics, in the ability to compare multiple datasets simultaneously, and in running multiple applications at once. The radiologists estimated the setup could save 30 seconds per case and concluded it was a cost-effective way to improve efficiency without additional spending.
Technology Update: The More Things Change, the More Fun It Gets (Michael Carnell)
This presentation was delivered to the Institute of Management Accountants in Charleston, South Carolina. It includes information on making sure your current IT position is stable as well as making plans for the future.
4 Best Practices for Patch Management in Education IT (Kaseya)
This document discusses best practices for patch management. It begins by introducing the speakers. It then discusses challenges with manual patching processes and how the time between patch release and exploits is shrinking. The next sections outline 4 best practices for patch management: 1) discover and assess systems, 2) identify and test patches, 3) evaluate and plan patch deployment, and 4) deploy and remediate patches. An additional best practice of automating patch management is recommended to reduce costs and improve productivity. The document concludes with information about Kaseya as a patch management solution provider.
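Purely as a hypothetical illustration of the cycle those four practices describe (ours, not Kaseya's product API; every function is an illustrative stand-in):

```python
# Hypothetical outline of an automated patch-management cycle.
# Every function here is an illustrative stand-in, not a real product API.

def discover_systems():
    """1) Discover and assess: inventory the endpoints under management."""
    return ["host-a", "host-b"]

def identify_patches(host):
    """2) Identify and test: list vendor patches relevant to this host."""
    return [{"id": "KB0001", "severity": "critical", "tested": True}]

def plan_deployment(patch):
    """3) Evaluate and plan: approve tested patches by severity."""
    return patch["tested"] and patch["severity"] in ("critical", "important")

def deploy_and_remediate(host, patch):
    """4) Deploy and remediate: apply the patch and verify success."""
    print(f"applying {patch['id']} to {host}")

# Automating this loop is the extra best practice: run it on a schedule
# and emit reports instead of patching each machine by hand.
for host in discover_systems():
    for patch in identify_patches(host):
        if plan_deployment(patch):
            deploy_and_remediate(host, patch)
```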
Keynote given at BOSC, 2010.
Does the hype surrounding cloud computing match the reality?
Can we use clouds to solve the problems of provisioning IT services to support next-generation sequencing?
The Intel® Xeon® processor E5-2600 v2 product family enabled healthcare company Sectra to stream 3D medical images to a wider range of devices, allowing more clinicians to access images. Sectra optimized its software to generate and stream images from servers equipped with the Intel processors. Tests showed the new solution could support 35% more concurrent users and delivered a 50% performance increase over the previous generation processors. This improved performance allows hospitals to consolidate servers and reduce hardware costs while providing medical images to more clinicians across the hospital.
You are already the Duke of DevOps: you have mastered CI/CD, you have feature teams that include ops skills, and your time to market rocks! But you are having difficulty scaling, with quality issues and QoS at risk. You are quick to adopt practices that increase development flexibility and deployment velocity. An urgent question follows on the heels of these benefits: how much confidence can we have in the complex systems we put into production? Let's talk about the next wave of DevOps hype: SRE, error budgets, continuous quality, observability, and Chaos Engineering.
The document summarizes business continuity, disaster recovery, and data backup. It discusses the differences between business continuity planning, disaster recovery planning, and backups. It also provides statistics on data loss and discusses factors like recovery point objectives and recovery time objectives. The document compares tape backups to disk backups and outlines costs associated with data protection and backup solutions.
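For context (our own illustration, not from the document): the recovery point objective bounds how much recent data you can afford to lose, so it dictates the maximum gap between backups, while the recovery time objective bounds how long restoration may take. A minimal sketch, assuming a simple safety margin:

```python
# Illustrative only: relate backup frequency to a recovery point objective (RPO).
# If the RPO is 4 hours, the gap between successful backups must never exceed it.

def max_backup_interval_hours(rpo_hours: float, safety_margin: float = 0.75) -> float:
    """Return a backup interval that keeps worst-case data loss within the RPO.

    The margin leaves headroom for a failed or slow backup run.
    """
    return rpo_hours * safety_margin

print(max_backup_interval_hours(4.0))  # -> 3.0 hours between backups
```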
An Approach to Automated Techniques for Data Extraction and Integrity Validat... (nationaldosimetry)
The document proposes a semi-automated method for extracting and analyzing data from CT dosimetry reports through optical character recognition to address limitations of current audit techniques. Key steps include collecting DICOM dose report files, using OCR software to convert bitmap images to text, extracting data from DICOM headers, merging information into a database, and analyzing with statistical tools. This allows faster, more comprehensive audits of patient radiation exposure compared to manual methods.
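For illustration, a minimal sketch of such a pipeline (our own, assuming the pydicom and pytesseract libraries with a working Tesseract install; the directory, file names, and chosen header fields are hypothetical):

```python
# Hedged sketch: read DICOM dose-report files, pull metadata from the DICOM
# header, OCR any bitmap screen capture into text, and collect one CSV record
# per file for later statistical analysis.
import csv
import glob

import pydicom                      # DICOM parsing
import pytesseract                  # OCR wrapper around Tesseract
from PIL import Image

rows = []
for path in glob.glob("dose_reports/*.dcm"):        # hypothetical directory
    ds = pydicom.dcmread(path)
    # Metadata comes straight from the DICOM header.
    record = {
        "patient_id": ds.get("PatientID", ""),
        "study_date": ds.get("StudyDate", ""),
        "modality": ds.get("Modality", ""),
    }
    # Many dose reports are stored as secondary-capture bitmaps; OCR the pixels.
    if "PixelData" in ds:
        img = Image.fromarray(ds.pixel_array)
        record["ocr_text"] = pytesseract.image_to_string(img)
    rows.append(record)

with open("dose_audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["patient_id", "study_date", "modality", "ocr_text"])
    writer.writeheader()
    writer.writerows(rows)
```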
Patch Management: 4 Best Practices and More for Today's Healthcare IT (Kaseya)
1) The document discusses best practices for patch management in healthcare IT, including discovering and assessing systems to identify needed patches, identifying and testing patches, evaluating and planning patch deployment, and deploying and remediating patches.
2) It recommends automating patch management for reduced costs, improved productivity and system performance. Automation can assess systems, identify new patches, evaluate needed patches, schedule deployments, and create reports.
3) The document is a presentation about patch management solutions from Kaseya, an IT automation company, and promotes their product for comprehensive, scalable, and affordable automated patch management.
Prometheus is a next-generation monitoring system. It lets you see not only what your systems look like from the outside but also gives visibility into the internals and business aspects of your systems. This allows everyone to benefit, including both operations and developers. This talk will look at the concepts behind monitoring with Prometheus, how it's designed, why it's suitable for Cloud Native environments, and how you can get involved.
The document discusses challenges for control system design and implementation on retrofit and automation projects. Meeting requirements from corporate, plant maintenance, and operations can be difficult. Involving operators and using real-time simulations for training are important. Defining control functionality clearly using standards like ISA-88 is also key, as is improving process definitions and thoroughly understanding electrical systems when retrofitting. Benefits of successful projects include increased capacity, quality, and flexibility.
IT Performance Management Handbook for CIOs (Vikram Ramesh)
Learn why measuring performance on individual devices and systems often leaves admins flying blind when it comes to SLA management and identifying performance bottlenecks. This in-depth e-Guide talks about how VirtualWisdom4 can give administrators a live, up-to-the-second view across the system-wide IT infrastructure.
Patch Management: 4 Best Practices and More for Today's Banking IT Leaders (Kaseya)
This document summarizes a webcast about patch management best practices for banking IT leaders. It discusses challenges with manual patching processes and costs associated with them. It then outlines 4 best practices for patch management: 1) discover and assess systems, 2) identify and test patches, 3) evaluate and plan patch deployment, and 4) deploy and remediate patches. An additional best practice of automating patch management processes is recommended to reduce costs and improve productivity. The presentation then provides an overview of Kaseya's patch management automation solutions and customer examples.
The document discusses computer hardware and software replacement recommendations from two sources. It recommends replacing computers every three years due to obsolete software and new models. Hardware updates should include an Intel dual-core processor, 2GB RAM, and larger hard disks. Software experts recommend Windows 7 and updated application software like Office, PHP, Java, and antivirus software. Further readings suggest large companies replace entire inventories every 2-3 years through staggered replacement of one-third annually.
This document discusses the role and responsibilities of a network administrator. A network administrator is responsible for maintaining computer hardware and software that makes up a computer network. This includes tasks like installing and upgrading software and hardware, troubleshooting issues, monitoring systems for problems, adding/deleting user accounts, transferring computers, and documenting activities. The document outlines some of the common issues network administrators deal with and provides examples of specific troubleshooting scenarios. It also discusses the importance of backup, security updates, and reliability in computer networks.
The document provides tips for building a scalable and high-performance website, including using caching, load balancing, and monitoring. It discusses horizontal and vertical scalability, and recommends planning, testing, and version control. Specific techniques mentioned include static content caching, Memcached, and the YSlow performance tool.
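As an illustration of the caching technique mentioned (a sketch of ours, assuming the pymemcache client and a local Memcached server; the query function is a hypothetical stand-in for a slow database call or page render):

```python
# Sketch of read-through caching with Memcached: check the cache first and
# fall back to the expensive source only on a miss.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumes a local Memcached instance

def expensive_query(key: str) -> str:
    # Hypothetical stand-in for a slow database call or page render.
    return f"rendered page for {key}"

def get_page(key: str, ttl_seconds: int = 300) -> str:
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")          # cache hit: skip the slow path
    value = expensive_query(key)               # cache miss: compute...
    cache.set(key, value, expire=ttl_seconds)  # ...and store for next time
    return value

print(get_page("/home"))
```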
TidalScale has created a software-defined computer.
At TidalScale, we have created a simple, cost-effective way for a data scientist, an analyst, an engineer, a scientist, a database administrator, or a software developer to access a group of servers through a single operating system instance as if it were a single supercomputer. This dramatically simplifies development and reduces software scaling complexity, not to mention delivering dramatic cost savings in hardware and software.
We configure hosted hardware into one or more TidalPods. Each TidalPod is a virtual supercomputer comprising a set of commodity servers configured with the TidalScale HyperKernel. What the user sees is standard Linux, FreeBSD or Windows running with the sum of all memory, processors, networks, and I/O. The secret sauce is the HyperKernel that fools the guest OS into thinking it’s running directly on a huge, expensive machine when in fact it’s running on a set of smaller, less expensive servers.
We offer an incredibly simple user experience.
• Define the computer size you want (number of CPUs, amount of memory), boot the virtual machine, then log in to the computer…
Thus, we enable a simple, cost-effective way for a data scientist, an analyst, an engineer, a scientist, a database administrator, or a software developer to access a group of servers in a data center through a single operating system instance as if it were a single supercomputer, dramatically simplifying development and reducing software scaling complexity along with hardware and software costs.
AmericanEHR Webinar - Selecting the Right Hardware for your EHR-based Practice (Cientis Technologies)
Choosing between desktops, laptops, or tablets is just one of the choices you face when selecting hardware for your EHR-based practice. There are many questions to answer, including the type of EHR software you have selected, the office network, and the physical space available.
On July 20, 2010, IBM announced the IBM TS7610 ProtecTIER® Deduplication Appliance Express, a complete deduplicated storage subsystem for small and medium enterprises (SMEs) and remote offices. The new subsystem is the newest and smallest member of the ProtecTIER series, a leading enterprise-grade deduplication technology that IBM acquired from Diligent Technologies in 2008 and continues to develop and enhance at a remarkable pace. The TS7610 uses the same ProtecTIER software found in the larger TS7650 solutions, has the same ProtecTIER functionality, comes pre-configured (ready to use), and offers very competitive CapEx and OpEx pricing. Learn more: http://ibm.co/ONeH7m
The document discusses the role and responsibilities of a network administrator. It describes how computers are now integrated into many organizational systems and must operate reliably to support functions like phone systems, heating, and more. As a network administrator, key responsibilities include installing and upgrading software and hardware, troubleshooting issues, monitoring systems for security and performance, and ensuring critical infrastructure like servers and backups are functioning properly. Attention to detail, ability to think ahead and consider many scenarios, and strong documentation skills are important for the job.
An Introduction to Prometheus, GrafanaCon 2016 (Brian Brazil)
Often what you monitor and get alerted on is defined by your tools rather than by what makes the most sense to you and your organisation. Alerts on metrics such as CPU usage are noisy and rarely spot real problems, while outages go undetected. Monitoring systems can also be challenging to maintain and overall provide a poor return on investment.
In the past few years several new monitoring systems have appeared that offer more powerful semantics and are easier to run, offering a way to vastly improve how your organisation operates and to prepare you for a Cloud Native environment. Prometheus is one such system. This talk will look at the monitoring ideal and how whitebox monitoring with a time series database, multi-dimensional labels, and a powerful querying and alerting language can free you from midnight pages.
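To make the whitebox idea concrete, here is a minimal sketch of ours using the official prometheus_client Python library (the metric names and simulated workload are illustrative):

```python
# Minimal whitebox instrumentation: expose request counts and latencies so
# Prometheus can scrape them and alerting can be driven by real semantics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled",
                   ["method", "status"])
LATENCY = Histogram("app_request_duration_seconds", "Request latency")

@LATENCY.time()                      # records each call's duration
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.1))        # simulated work
    REQUESTS.labels(method="GET", status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)          # metrics at http://localhost:8000/metrics
    while True:
        handle_request()
```

A PromQL expression such as rate(app_requests_total[5m]) can then drive dashboards or alerts based on actual behaviour rather than noisy CPU graphs.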
The document discusses the Sanger Institute's experiences with moving genomic research workloads and data to the cloud. Key points include:
- Mirroring an existing web application on AWS improved performance but required significant code changes.
- Running genomic analysis pipelines and databases on AWS was more cost effective than traditional colocation, though software configuration took effort.
- Data transfer speeds over the public internet pose challenges for moving terabytes of sequencing data to cloud resources.
- Security, data governance, and funding models require careful consideration when using cloud services for sensitive genomic and medical data. Private clouds may help address some issues.
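To give a sense of the transfer challenge (our own back-of-the-envelope figures, not the document's), even a fully saturated 1 Gbps link moves a single terabyte in a little over two hours:

```python
# Back-of-the-envelope transfer times for sequencing-scale data.
def transfer_hours(terabytes: float, gbps: float) -> float:
    bits = terabytes * 8e12          # 1 TB = 8e12 bits (decimal convention)
    seconds = bits / (gbps * 1e9)
    return seconds / 3600

for tb in (1, 10, 100):
    print(f"{tb:>4} TB at 1 Gbps: {transfer_hours(tb, 1.0):7.1f} hours")
# -> 1 TB ~ 2.2 h, 10 TB ~ 22.2 h, 100 TB ~ 222 h,
#    and real links rarely sustain line rate.
```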
1. The document discusses 5 key topics about disaster recovery planning: mixed platforms in data centers, virtualization, cloud computing, cost savings through virtualization, and the importance of testing disaster recovery plans.
2. Virtualization can simplify disaster recovery planning by allowing a single virtual recovery platform to protect all workloads regardless of physical or virtual servers and operating systems.
3. Cloud computing is well-suited for disaster recovery needs due to its on-demand, elastic resource model that handles unexpected resource demands.
A comparison to a previous-generation workstation and a current-generation competitor
Saving time at any point in the machine learning process can prove invaluable. Our medical 3D imaging, NLP, and computer vision tests show that the HP Z8 Fury G5 classified more samples in less time than the other workstations we tested. With these time savings, scientists, analysts, and engineers could act on valuable insights sooner, integrating them into technologies, diagnoses, or business strategies.
How far have you got with learning about Cloud? Got your head around Platform as a Service? Understand what IaaS means? Can spell Docker? Working in a DevOps mode? It’s easy to focus on learning new technology but it’s time to take a step back and look at what the technical implications are when an application is heading to the cloud. In the world of the cloud the benefits are high but the economics (financial and technical) can be radically different. Learn more about these new realities and how they can change application design, deployment and support. The introduction of Cloud technologies and its rapid adoption creates new opportunities and challenges. Whether designer, developer or tester, this talk will help you to start thinking differently about Java and the Cloud.
Presented at JAX DE, 2016
This slide deck briefly presents how organizations can leverage the cloud to virtualize functional and performance testing and gain cost benefits over investing in hardware.
The Six Monitor Workstation: Twice the Flow for Half the Cost
John Mukai, M.D., David Bader, M.D., FACR, Alexander Rende, M.D.
St. Vincent Hospital, 123 Summer St., Worcester, MA 01608
QUESTION
How could a radiologist produce at twice the rate for half the cost of the usual default hardware/software workstation?
ANSWER
A six-monitor workstation: a paradigm shift in a busy imaging department.
WE WOULD LIKE TO SHARE OUR STORY WITH YOU
We were nagged by a visceral discomfort with our workflow. Here is our story, but go see the station itself.
Introduction
Our experience:
We have a busy multispecialty imaging practice in a very cost-sensitive environment. We experienced the rollout of an entirely new PACS as part of our hospital system's national rollout. The configuration was set at three monitors. We modified the enterprise rollout to four monitors, as we had been experimenting with four or five monitors with the previous vendor.
In parallel, there was growing PET and cardiac CT volume. For example, we were struggling with poor access to a single PET workstation. Not only was it physically in a separate area, it needed a separate mouse, desk, and chair. One had to change workstations and use two disparate systems.
Sidelight:
There is a literature on the psychology of "flow". Flow states are those that develop out of a smooth sense of progress. If anything, our initial systems were anti-flow.
Beta testing a simple, cost-effective hardware solution:
We explored various inexpensive options, hoping to spend no more than $1,000 to provide a 5th and 6th monitor. Our goal was to determine whether a simple, robust solution was possible and whether a preliminary test by a few users warranted further rollout.
Historical evolution of the monitor-driven workstation:
At the transition to digital imaging, some radiologists erroneously assumed that the digital workstation should approximate the analog eight-panel world. It soon became evident that, with hundreds of images, it was much better to scroll the stack on one monitor, not to mention avoiding a prohibitive monitor expense.
Current workflow dilemma:
Data throughput and display software have evolved not towards sequencing single static images in a single stack but towards real-time, fast-moving, simultaneous orthogonal data. Review has become a near-cinematic comparative exercise, and yet, as with the chest x-ray of old, there may be three or four time points. Furthermore, aside from our basic PACS, there are evolving specialty imaging applications such as coronary CT angiography, CT/PET fusion imaging, nuclear medicine cardiac imaging, and complex reconstructive real-time data analysis, along with dynamic and functional data analysis fused onto imaging data. Add to this the complex mix of multiple PACSs outside our system, from the cloud, part of the increasingly integrated healthcare enterprise approach. Gone are the days of "when old films arrive an addendum report will follow". The 1,000 images of a CT are the modern equivalent of a 2-view CXR, needing comparison to multiple readily available prior 1,000-image exams.
Problem:
There is limited capital for investment, and IT support is predominantly devoted to maintaining current systems: "Make sure it doesn't crash!" "Clean up that server!" The radiologist can "drive" change to improve flow because we are the ones driving. We suffer with anti-flow. In fact, the response to the radiologist is often "if it ain't broke, don't fix it". That is a crisis-intervention paradigm, not a proactive process-improvement paradigm; we espouse the latter.
Sidelight:
As noted above, flow states develop out of a smooth sense of progress. Most current standard hardware/software PACS models need to be tweaked to support the front-end user, and when suboptimally configured they become anti-flow.
THE OLD DAYS
In the old days, all we did was look.
TIME IS MONEY
Analysts understand that a missed detail or a second's delay per click or window could mean millions. Gamers understand as well: they would lose the game.
CAN WE RECONFIGURE?
Do we not have a time-sensitive, mission-critical, unique job? "Sometimes custom designs, sizes, and looks are called for to meet customer requirements."
HYPOTHESIS
Improve efficiency with an eye towards providing more real estate for comparison of complex data sets at multiple time points in real time. Without the need to close windows, more monitors will improve flow if seamlessly integrated.
TECHNICAL
Hardware:
1. Monitors: a pair of four-megapixel (typically 2560 x 1600), 30-inch monitors (Yamakasi, Korea), under $400 each including cables. We found these comparable to Dell 30-inch monitors costing $1,000 each, roughly three times the price of the Yamakasi monitors.
2. Video card: NVIDIA GeForce 640 with dual DVI outputs, needed to drive a pair of four-megapixel monitors; under $100. A GeForce 630 can be used for two-megapixel (1920 x 1200) monitors.
3. Computer: the authors have done this on many different computers. Because the method is not resource intensive (the remote administrator software behaves like a thin client; the heavy lifting is done by the remote thick client), our configuration has worked on Windows XP with 4 GB of RAM on a standard workstation such as an HP Z600 (originally Windows XP 32-bit, then XP 64-bit), and on the most recent enterprise rollout of Windows 7 64-bit with 16 GB of RAM on a Dell Precision T3610.
4. Total cost: under $1,000. (This saves $5,000 by not purchasing a second PC, and $20K to $50K through decreased software license costs.) This is not only a cost-effective expansion of real estate; the department's overall hardware and software budget decreases. There are very few opportunities to spend less money and improve the work life of a radiologist. We strongly believe this is one!
Procedure:
1. Install the video card into the second PCIe slot.
2. Attach the dual DVI cables included with the monitors.
3. Launch the PACS application.
4. Total time: under 15 minutes.
Additional free software required:
1. TurboTop (freeware). This is a key item: it allows control of which windows stay in the foreground, with the PET application defaulted to the foreground (for example, kept on top of the main PACS application).
2. Remote administrator software (hospital enterprise license). This essentially enables the add-on monitors to act as virtual viewers onto any other computer in the department, our administrative offices, etc.
3. KatMouse (freeware). Allows toggling of the Z-order of overlapping imaging application windows (prevents the main application from hogging the foreground).
Total cost of additional software: zero dollars.
RESULTS
The purpose of this endeavor was to:
1. Develop a reasonable, cost-effective hardware add-on to convert a 4-monitor system to a 6-monitor system.
2. Use the six-monitor system over a 12-month trial to determine whether workflow improved.
Improvements:
1. Ergonomics, with standing and sitting options: average 9/10.
2. Comparison of large data sets at multiple simultaneous time points: average 7/10.
3. Viewing three more imaging applications on one system: average 9/10.
A thought exercise:
With half as many keyboards, mice, clicks, etc., conservatively assume we could save 30 seconds per widget x 80 widgets per radiologist per day x 12 radiologists per day. That equals 8 radiologist-hours saved per day over an 8-hour shift, or one full FTE per day of savings (in either money or time, which are equivalent).
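A quick check of that arithmetic (a sketch of ours, using the poster's own figures):

```python
# Verify the poster's savings estimate: the seconds saved add up to one FTE.
seconds_per_widget = 30
widgets_per_radiologist_per_day = 80
radiologists_per_day = 12
shift_hours = 8

saved_hours = (seconds_per_widget * widgets_per_radiologist_per_day
               * radiologists_per_day) / 3600
print(saved_hours)                    # -> 8.0 radiologist-hours per day
print(saved_hours / shift_hours)      # -> 1.0 FTE per day
```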
Finally, as workload goes up and reimbursement goes down, radiologist fatigue and even burnout loom on the horizon. There is a tendency to call for more FTEs to saw the wood or process the widgets. Before that maneuver, however, one needs the sharpest saw possible, so that when more FTEs are actually needed there is maximum bang for minimum buck. Based on our preliminary experience and comparison, we believe this system is a much sharper saw. The sharpening is also cost effective: it avoids the need for as many specialty licenses and thick clients, saving at least $5,000 per PC not purchased, not to mention the license fees. This may become moot if all applications move to thin clients; regardless, the system will still be beneficial for the ease of side-by-side comparison of all the data in the cloud.
CONCLUSION
1. Our hardware add-on is a simple, cost-effective method costing under $1,000 and saving tens of thousands of dollars. Installation takes 15 minutes and requires no extreme IT skills.
2. Workflow is markedly improved. None of us would revert to a 4-monitor system; to do so would significantly impede flow. We plan further substantiation with appropriate volume, quality, and turnaround-time metrics on a larger scale.
3. Finally, this method has the potential to save $20K to $50K department wide.
Thank you for your attention. Please visit the station for a hands-on demonstration.