Category Archives: Data

Tufts Data Rescue

From 1/20-1/23 Tufts University and the Open Geoportal Community conducted a sizeable U.S. federal data harvest. We surveyed Tufts faculty to identify the geospatial data most critical to their teaching and research. We then conducted a curated harvest to back up potentially at-risk federal, environmental, and social justice geospatial data and associated tabular data. Currently we have around 20 TB of data backed up on the Tufts Research Data Storage Network. This is an ongoing effort, as we are continuing to harvest data daily. Please contact us if you would like to participate or learn more.

Here is a summary of the Tufts OGP Data Rescue as of 3/1/2017

Other Data Rescue Efforts Around the Nation

Nationwide Data Rescue at Penn with Calendar of Events – Penn CKAN repository

End of Term project

Environmental Data and Governance Initiative (EDGI)

Climate mirror

Azimuth climate data backup project, run by John Baez

Crawling Rockland Site

Data from the Rockland County (NY) GIS site is now available at . The following image shows some of the search results after a crawl-based ingest of the Rockland data site:


Even though no web services are available, these search results can still be previewed. The following image shows the Rockland State Senate district boundaries:


Here’s a screenshot with two separate bus routes previewed.


On the Rockland County site, many bus routes are stored in a single zip file.  These are individually searchable and previewable on WorldWideGeoWeb.

Directed crawling of new sites presents new challenges and reveals limitations in the existing code.  Several changes were made to successfully crawl Rockland, NY data at .

The Rockland site does not contain direct links to zip files. A typical link to a data file is  The crawl code was changed to support links to servlets rather than simple zip files.
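The generalized link test might be sketched as follows. This is a hypothetical helper, not the actual WorldWideGeoWeb code: a crawl candidate is either a plain link to a zip file or a servlet-style download link such as Rockland's DownloadData.jsp.

```javascript
// Decide whether a crawled link looks like a data download.
// (Illustrative sketch; function name and rules are assumptions.)
function isDataLink(href) {
  const url = href.toLowerCase();
  // Simple zip links, possibly with a query string appended.
  if (/\.zip(\?.*)?$/.test(url)) return true;
  // Servlet downloads: match on the servlet name, not the extension.
  if (url.includes("downloaddata.jsp")) return true;
  return false;
}
```

With rules like these, `isDataLink("files/roads.zip")` and a DownloadData.jsp servlet link both qualify, while an ordinary HTML page does not.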

The Rockland metadata files contain minimal, often cryptic titles; for example, “monsey2” and “TZX”. These titles are not sufficient on their own. Fortunately, they can be augmented with information scraped from the crawled web page. Specifically, the text from the anchor tag linking to the DownloadData.jsp servlet is concatenated with the title field in the XML metadata file. This creates user-friendly titles, for example, “monsey2: TOR Bus Routes” and “TZX: Tappan Zee Express Bus Route”.
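The title-augmentation step could look roughly like this (a minimal sketch; the function name and the duplicate-text guard are assumptions, but the concatenation matches the examples above):

```javascript
// Combine the cryptic title from the XML metadata with the anchor
// text scraped from the page that linked to the download servlet.
function augmentTitle(metadataTitle, anchorText) {
  const anchor = anchorText.trim();
  // Guard against empty or redundant anchor text (e.g. "TZX: TZX").
  if (!anchor || anchor === metadataTitle) return metadataTitle;
  return `${metadataTitle}: ${anchor}`;
}
```

For example, `augmentTitle("monsey2", "TOR Bus Routes")` yields `"monsey2: TOR Bus Routes"`.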

The Rockland site contains zip files that hold multiple shapefiles. For example, one file contains 6 separate bus routes, each in a separate shapefile. Each shapefile is ingested as a separate entity so it can be independently searched and previewed. The 28 links on the Rockland data page expand into 47 searchable, previewable spatial resources. Note that the OpenGeoPortal download operation pulls down the entire zip file from the Rockland server, not the individual shapefiles.
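Enumerating the shapefiles inside one zip can be sketched as below (an assumed helper, not the ingest code itself): each `.shp` entry marks one shapefile, and its basename identifies the companion `.shx`, `.dbf`, and `.prj` entries.

```javascript
// Given the entry names inside one downloaded zip, list each distinct
// shapefile so it can be ingested as its own searchable resource.
function shapefileBasenames(zipEntries) {
  return zipEntries
    .filter((name) => name.toLowerCase().endsWith(".shp"))
    .map((name) => name.slice(0, -4)); // strip the ".shp" extension
}
```

A zip holding two bus routes would thus expand into two independently ingestable entities.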

Since the Rockland site only supports secure connections, the ingest code was enhanced to support https.

Crawling Westchester Site

At the NYC OGP meeting, I demoed my thesis site. Since there isn’t a video of it, here’s a write-up covering the same material. There are also some slides available.

The goal of my Master's thesis is to make all the world's spatial data accessible. This goal is accomplished by expanding OpenGeoPortal in two significant ways. First, spatial data files are discovered via web crawls and then ingested. Second, the ability to preview and download layers without needing OGC protocols was developed. This expanded version of OpenGeoPortal is on the web.

Data on the site was discovered by crawling the web, relying exclusively on HTTP GET requests. This is the same technique used by Google and other search engines. The WorldWideGeoWeb crawler can be instructed to crawl a specific site. Sites are searched for links to zip files. The ingest code retrieves and unzips these files. If a zip contains a shapefile, the bounding box is determined using the .shp and .prj files. Any metadata file is also parsed. Information about each discovered layer is ingested into WorldWideGeoWeb's Solr instance.
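The bounding-box step can be illustrated directly from the ESRI shapefile specification: the 100-byte main-file header stores Xmin, Ymin, Xmax, and Ymax as little-endian doubles at byte offsets 36, 44, 52, and 60. This sketch (the function name is an assumption) reads them; the .prj file would still be needed to interpret the coordinates.

```javascript
// Read the bounding box from a shapefile's main-file header.
// Offsets come from the ESRI Shapefile Technical Description.
function readShpBoundingBox(buffer) {
  const view = new DataView(buffer);
  return {
    xmin: view.getFloat64(36, true), // true = little-endian
    ymin: view.getFloat64(44, true),
    xmax: view.getFloat64(52, true),
    ymax: view.getFloat64(60, true),
  };
}
```

Because only the first 100 bytes are needed, the bounding box can be extracted without parsing any geometry records.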

After ingest, OpenGeoPortal's powerful search interface allows users to quickly and easily find spatial data layers. Previewing shapefiles on the map is based on parsing and rendering them entirely in JavaScript; it does not use image tiles from GeoServer or ArcGIS Server. When the user selects a layer to preview, the browser sends a request to the server to create a temporary, server-side copy of the zip file. During ingest, the URL of the zip file was stored in the Solr record; the server issues an HTTP GET to that URL to create a local copy of the zip file, which it then unzips. At this point the browser requests the .shp, .shx, .prj and .dbf elements of the shapefile and processes them in JavaScript as binary data streams. If the data is not in a suitable projection, it is reprojected in the browser. The features in the shapefile are then parsed and rendered on OpenGeoPortal's map. Attributes in the .dbf file are displayed as features are moused over.
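The browser-side binary parsing can be sketched as a record loop over the .shp stream (a simplified illustration, not the thesis code): after the 100-byte file header, each record carries an 8-byte big-endian header (record number, then content length in 16-bit words), followed by the geometry, which begins with a little-endian shape-type integer.

```javascript
// Walk the records of a .shp byte stream and return each record's
// shape type. A full parser would decode the geometry as well.
function listShapeTypes(buffer) {
  const view = new DataView(buffer);
  const types = [];
  let offset = 100; // records begin after the fixed-size file header
  while (offset + 8 <= buffer.byteLength) {
    const contentWords = view.getInt32(offset + 4, false); // big-endian length
    types.push(view.getInt32(offset + 8, true)); // little-endian shape type
    offset += 8 + contentWords * 2; // header + content (words are 2 bytes)
  }
  return types;
}
```

Mixing endianness within one file is a quirk of the shapefile format: the record headers are big-endian, the record contents little-endian, which is why `DataView` with an explicit endianness flag suits this job well.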

The following screenshot shows search results discovered by crawling Westchester County's data web site. The map shows a preview of the layer titled “County Legislative Districts”. The browser debug panel at the bottom of the screenshot shows the network traffic generated by the preview request. The “cacheShapeFile.jsp” Ajax call told the server to copy the shapefile with an HTTP GET and unzip the result. After the Ajax request completes, the .shx, .shp, .dbf and .prj files are requested by the browser and parsed in JavaScript. Transferring this 220-kilobyte layer first to the WorldWideGeoWeb server and then to the browser took just under 2 seconds.


The user can add any of these Westchester layers to the cart and download them. The zip files are transferred directly from the Westchester server to the browser. Clipping the data or converting it to another format is not supported.

WorldWideGeoWeb shows it is possible to build a powerful, interactive portal without requiring data holders to create web services. Data available only on web sites designed for people can be ingested via a web crawl and previewed using advanced JavaScript techniques that weren't available when the OGC protocols were created. Since WorldWideGeoWeb is built on OpenGeoPortal, data available via web services can also be supported.


There are significant limitations with the current version of the software. Most notable is its inability to deal with large shapefiles. Currently, shapefiles over one megabyte can cause the browser to hang. Search results are color-coded to advise the user. Green layers are small and should preview quickly. Yellow layers are larger but should preview without too much delay. Layers in red represent shapefiles over a megabyte and should not be previewed. Even these large layers can still be easily downloaded, just not previewed.
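The advisory coloring reduces to a simple size threshold check, roughly like this sketch. Only the one-megabyte red limit comes from the text above; the green/yellow boundary here is an assumption for illustration.

```javascript
// Map a shapefile's size to the advisory preview color.
// The 256 KB green/yellow cutoff is an assumed value.
function previewColor(sizeBytes) {
  const MB = 1024 * 1024;
  if (sizeBytes > MB) return "red";            // may hang the browser; download only
  if (sizeBytes > 256 * 1024) return "yellow"; // previews with some delay
  return "green";                              // small; previews quickly
}
```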

My thesis code is not production quality.

Future Directions

During a crawl, only spatial resources in shapefiles are discovered, and their associated metadata must be in FGDC or ISO 19115 format. It would be trivial to add support for KML and KMZ files. Support for other file formats and metadata standards could also be integrated.

Crawling based on OGC protocols such as GetCapabilities and CSW could also be added.

The ranking of search results could be based on the PageRank of the page that linked to the zip file.

Semi-spatial data such as web pages about places could be ingested and searched spatially.

Other Notes

I now work for Voyager Search. We are investigating how some of these ideas could be incorporated into their existing products. The code created for my thesis has been released under the GPL.

Some of the data an organization provides may be more critical or more widely used than the rest. That data could be made available via an OGC-compliant server, while other, less critical data is made available only via HTTP GET.