In 2012, FEMA gave a presentation on how FME was used to track, QC, and process data for FEMA's National Flood Hazard Layer, the regulatory database for flood hazard risk in the National Flood Insurance Program.
This new presentation demonstrates how FEMA has continued to use FME to increase the level of automation and reduce the amount of human input required in the process.
2. About Us
Eliza Ledwell
IBM – FEMA Risk MAP
Customer and Data Services,
Managing Consultant
Eliza.ledwell@us.ibm.com

Rob Gaines
IBM – FEMA Risk MAP
Customer and Data Services,
Data Services Manager
rwgaines@us.ibm.com
3. FEMA Risk MAP
Risk MAP Vision:
• Work collaboratively with state & local entities
• Deliver quality data
• Increase public awareness
• Reduce risk to life & property
Mapping, Analysis, and Planning
4. FEMA Risk MAP
Customer & Data Services (CDS):
• IT Hosting
• Application development
• Communications & user support
• Consulting
5. FEMA’s National Flood Hazard Layer (NFHL)
• Widespread: all effective Flood Insurance Rate Maps for areas covered by digital data
• Updated daily: the single most up-to-date source of FEMA regulatory flood hazard information
• Public-facing: NFHL web services receive over 20 million hits per month
6. Organization and Display
The NFHL is organized into more than 50 data layers, including:
• Flood hazard zones and labels
• Base Flood Elevations (BFEs)
• Cross-sections and coastal transects
• Revision information, such as LOMR and FIRM panel boundaries
• Community boundaries and names
• Structures such as levees and other hydraulic structures involved in flood control
7. Ways to Access NFHL Data
• WFS
• NFHL Status Page
• Google Earth™
• WMS
• REST
• FEMA GeoPlatform
• MSC
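As a rough illustration of the REST option above, here is how one might build an ArcGIS-style point query against an NFHL map service. This is a sketch only: the service URL and layer id shown are assumptions, not taken from this presentation.

```python
# Build an ArcGIS REST "query" URL for features intersecting a point.
# The base URL and layer id passed in are illustrative assumptions.
from urllib.parse import urlencode

def build_nfhl_query(base_url, layer_id, lon, lat):
    """Return a query URL for features at a WGS84 lon/lat point."""
    params = {
        "geometry": f"{lon},{lat}",
        "geometryType": "esriGeometryPoint",
        "inSR": 4326,  # WGS84 longitude/latitude
        "spatialRel": "esriSpatialRelIntersects",
        "outFields": "*",
        "returnGeometry": "false",
        "f": "json",
    }
    return f"{base_url}/{layer_id}/query?{urlencode(params)}"

# Hypothetical usage (service address and layer id are placeholders):
url = build_nfhl_query(
    "https://hazards.fema.gov/gis/nfhl/rest/services/public/NFHL/MapServer",
    28,            # flood hazard zone layer id (illustrative)
    -77.0365, 38.8977,
)
```

The same pattern works for any of the service endpoints listed above; only the base URL and output format change.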
11. How Does the NFHL Get Updated?
Receive Data → Track Data → QC Data → Stage Data → Incorporate Data → Extract Data
12. How Does the NFHL Get Updated?
Receive Data → Track Data → QC Data → Stage Data → Incorporate Data → Extract Data
FME
13. How Does the NFHL Get Updated?
Receive Data → Track Data → QC Data → Stage Data → Incorporate Data → Extract Data
Manual data movement:
• Data is received through download links in an email
• Data must be manually transferred to the secure DHS hosting environment
14. How Does the NFHL Get Updated?
Receive Data → Track Data → QC Data → Stage Data → Incorporate Data → Extract Data
FME
Tracking – the data undergoes basic integrity checks and is logged into the NFHL Tracking database.
16. How Does the NFHL Get Updated?
Receive Data → Track Data → QC Data → Stage Data → Incorporate Data → Extract Data
FME
QC – the data is checked for quality issues and rejected if issues are found.
17. How Does the NFHL Get Updated?
Receive Data → Track Data → QC Data → Stage Data → Incorporate Data → Extract Data
FME
Staging – data is loaded into the NFHL Staging Database (an offline file geodatabase) once it has passed QC.
18. How Does the NFHL Get Updated?
Receive Data → Track Data → QC Data → Stage Data → Incorporate Data → Extract Data
FME
Incorporation – data is published to the live NFHL database once it becomes effective.
19. How Does the NFHL Get Updated?
Receive Data → Track Data → QC Data → Stage Data → Incorporate Data → Extract Data
FME
Extraction – data is extracted from the live NFHL database into jurisdictional and state datasets and made available on www.msc.fema.gov.
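The six stages above can be modeled as an ordered sequence that a submission advances through one step at a time. The enum below is purely illustrative Python, not part of the actual NFHL tooling:

```python
# Illustrative model of the NFHL update pipeline stages named in the
# slides above; the class and function are assumptions for exposition.
from enum import IntEnum

class NFHLStage(IntEnum):
    RECEIVE = 1
    TRACK = 2
    QC = 3
    STAGE = 4
    INCORPORATE = 5
    EXTRACT = 6

def next_stage(stage):
    """Advance a submission to the next pipeline stage, or None at the end."""
    return NFHLStage(stage + 1) if stage < NFHLStage.EXTRACT else None
```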
21. Job Management
Current Process Overview:
Detect new submission files → Track submission information → QC submission data → Stage submission data → Publish datasets → Extract datasets
(Data stores: File Repository, Tracking Database, NFHL DB)
22. Old System vs. Current
Old:
• Manual input of parameters
• Manual running of workspaces
Current:
• Automatic detection of new submission files
• Automatic job triggering and population of parameters
• Increased tracking visibility
26. Automatic File Detection
1. Read in relevant files
2. Query log to see if files have been encountered
3. If new, record in log
(Flow: File Repository → FME Read-In Model → Tracking DB → Downstream Models)
27. Automatic File Detection
Step 1: Read in relevant files using the Directory and File Pathnames Reader pointed at the base file repository
• Identify relevant filetypes
28. Automatic File Detection
Step 2: Query the Tracking database to identify any datasets that have already been ‘seen’ by the system
• Disregard any seen datasets
• Record any new datasets
29. Automatic File Detection
Step 3: File information can now be used to run downstream processes and be associated with tracking entries.
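The three detection steps above can be sketched in plain Python. SQLite stands in for the NFHL Tracking database, and the table name and file extensions are illustrative assumptions:

```python
# Sketch of the three-step automatic file detection described above.
# SQLite stands in for the Tracking DB; table/extensions are assumptions.
import os
import sqlite3

def detect_new_submissions(repo_dir, db_path, extensions=(".zip", ".gdb")):
    """Return submission files not yet logged in the tracking database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS seen_files (path TEXT PRIMARY KEY)")
    new_files = []
    # Step 1: read in relevant files from the base file repository
    for root, _dirs, files in os.walk(repo_dir):
        for name in files:
            if not name.lower().endswith(extensions):
                continue  # identify relevant filetypes only
            path = os.path.join(root, name)
            # Step 2: query the log to see if the file has been encountered
            seen = conn.execute(
                "SELECT 1 FROM seen_files WHERE path = ?", (path,)
            ).fetchone()
            if seen is None:
                # Step 3: if new, record it in the log
                conn.execute("INSERT INTO seen_files (path) VALUES (?)", (path,))
                new_files.append(path)
    conn.commit()
    conn.close()
    return new_files
```

Running the function a second time over the same repository returns nothing new, which is exactly the behavior that lets downstream models process each submission only once.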
31. Job Management Basics
• Very useful for automating multi-stage data processes
• Tracking database captures information about each submission and records the runtimes and results of each processing phase
• Parent jobs query the tracking database to determine what work to perform, then spawn child jobs to handle each dataset
• Child jobs perform work and write results back to the tracking database
(Diagram: the Parent Job reads from the Tracking DB, runs Child Jobs against the Submission Data, and the Child Jobs write results back to the Tracking DB)
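A minimal Python analogue of this parent/child pattern, with SQLite standing in for the tracking database and a plain function call standing in for FME's WorkspaceRunner. The table and column names are invented for illustration:

```python
# Illustrative parent/child job pattern: the parent queries the tracking
# DB for pending work and spawns one child per dataset; each child writes
# its result back. Schema names are assumptions, not FEMA's actual schema.
import sqlite3

def child_job(conn, dataset_id):
    """Child job: process one dataset, then record the result."""
    status = "PASS"  # placeholder for the real processing phase
    conn.execute(
        "UPDATE submissions SET status = ? WHERE id = ?", (status, dataset_id)
    )

def parent_job(db_path):
    """Parent job: find pending submissions and run a child for each."""
    conn = sqlite3.connect(db_path)
    pending = conn.execute(
        "SELECT id FROM submissions WHERE status = 'PENDING'"
    ).fetchall()
    for (dataset_id,) in pending:
        child_job(conn, dataset_id)  # stands in for a WorkspaceRunner call
    conn.commit()
    conn.close()
    return len(pending)
```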
32. Job Management
Current Process Overview:
Detect new submission files → Track submission information → QC submission data → Stage submission data → Publish datasets → Extract datasets
(Data stores: File Repository, Tracking Database, NFHL DB)
33. Job Management Example: QC
Step 1: Parent job queries the tracking DB to determine which datasets need QC
• Can use in-model logic or a DB view/query
• Query for datasets that have been tracked but not QCed
34. Job Management Example: QC
Step 2: Parent job runs a child job for each dataset via the WorkspaceRunner
• Use data elements from the Tracking DB to set parameters for each job
35. Job Management Example: QC
Step 3: Child job writes results back to the Tracking DB
• Write a timestamped status to the master table so that future QC jobs will not re-check the same dataset
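The three QC steps above might be sketched as follows. SQLite again stands in for the Tracking DB, and the `tracking` table and its columns are illustrative, not FEMA's actual schema:

```python
# Sketch of the QC cycle: select datasets tracked but not yet QCed, run a
# QC check per dataset with parameters drawn from the tracking entry, and
# write a timestamped result so later cycles skip it. Schema is assumed.
import sqlite3
from datetime import datetime, timezone

def run_qc_cycle(db_path, qc_check):
    """Run QC on every dataset that has no QC timestamp; return the count."""
    conn = sqlite3.connect(db_path)
    # Step 1: datasets that have been tracked but not QCed
    todo = conn.execute(
        "SELECT id, data_path FROM tracking WHERE qc_time IS NULL"
    ).fetchall()
    for dataset_id, data_path in todo:
        # Step 2: child-job parameters come from the tracking entry
        result = qc_check(data_path)
        # Step 3: a timestamped status prevents re-checking this dataset
        conn.execute(
            "UPDATE tracking SET qc_result = ?, qc_time = ? WHERE id = ?",
            (result, datetime.now(timezone.utc).isoformat(), dataset_id),
        )
    conn.commit()
    conn.close()
    return len(todo)
```

Because the timestamp is written in the same cycle that performs the check, a re-run of the parent job finds nothing left to do, which is the idempotency the slide describes.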
37. Current System vs. New
Current (FME Desktop):
• Manual file download and upload
• Transmission by email / HTTP download
• Manual resolution of QC issues
New (FME Server):
• Web-based file upload
• Real-time QC results
• Fully automated
38. Job Management
Current Process Overview:
Detect new submission files → Track submission information → QC submission data → Stage submission data → Publish datasets → Extract datasets
(Data stores: File Repository, Tracking Database, NFHL DB)
39. New Process Overview
Submission Manager (FME Server):
• User Upload → Log Submission → Track submission information (Tracking Database) → QC submission data → QC Results (File Repository)
Replication Manager:
• Publish datasets (NFHL DB)
Extraction Manager:
• Extract datasets (File Repository)
40. New Process Lessons Learned
• FME Server plays very nicely with the parent/child job approach.
• It’s difficult to pass information between FME Server workspaces.
• The Tracking Database is more important than ever with FME Server.
• It can be difficult to provide output reporting with the out-of-the-box FME Server UI.
• Error handling is important.
• The Schema Reader is awesome for quality control.
43. The Future
• Expanded self-serve reporting options
• Additional data submission / QC processes
• Data integration web services
44. Thank you!
Robert Gaines
IBM – FEMA Risk MAP Customer and
Data Services, Data Services Manager
rwgaines@us.ibm.com
Eliza Ledwell
IBM – FEMA Risk MAP Customer and
Data Services, Managing Consultant
Eliza.ledwell@us.ibm.com