ArcGIS Pro 1.4 was released last week and hopefully some of you have already had a look at the new features. ArcGIS Pro now covers the workflows I typically use and my days of dipping back into ArcMap may be over. It also brings lots of new capabilities and options to the desktop user.
Want to dive into Python scripting in ArcGIS 10.1 but don't feel like reading a long, involved blog post about how to get started? That's good, because I don't have time to write one!
The Python Window is an easy way to start experimenting with Python in ArcMap. You can click on tools in toolboxes and "drag and drop" them into the Python Window.
The ArcGIS Python Community page has quick links to presentations, videos, tutorials, and other helpful resources. This is invaluable to beginners and experts alike.
If you want to automate repetitive tasks involving map documents (MXDs), for example checking all the MXDs in a folder for broken data layers, you will be interested in the arcpy.mapping sample scripts.
If your work involves more ArcGIS Server administration, then the Server REST API administration scripts will be more useful to you.
Finally, one of the main new Python features at 10.1 is the Data Access module (called arcpy.da). This provides new improved cursor objects to access geodatabases that are more powerful (for example, they can control edit sessions) and much faster, often more than 5 times quicker than the old cursors. The old cursors are still available to support legacy applications.
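If you want a flavour of the new cursors, here is a minimal sketch (it assumes ArcGIS 10.1 with arcpy available; the feature class path and field names are hypothetical):

```python
# Sketch only: requires ArcGIS 10.1+ (arcpy); the path below is hypothetical.
import arcpy

fc = r"C:\Data\demo.gdb\Roads"

# da cursors take an explicit field list and return plain tuples,
# which is a large part of their speed advantage over the old cursors.
with arcpy.da.SearchCursor(fc, ["NAME", "SHAPE@LENGTH"]) as cursor:
    for name, length in cursor:
        print("{0}: {1:.1f}".format(name, length))
```

Because you only ask for the fields you need, the cursor does far less work per row than the classic arcpy cursors.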
One of the most common errors seen when using the Productivity Suite OS Data Converter is shown in the example below, and relates to an inability to validate the Order Number:
The error is thrown because Productivity Suite cannot process data with different order numbers at the same time. It can process multiple datasets/files with the same order number, but does not support mixing order numbers within a single processing session. If you have data with different order numbers, you will need to process each order number as a separate batch.
The order number is the first set of digits in the file name:
Productivity Suite 3.0 contains a number of bug fixes and stability improvements, and has also been streamlined based on customer feedback. As part of this streamlining, the LocatorHub toolbar has been removed, along with various other features and functions that have either been superseded by core ArcGIS for Desktop capabilities or are no longer needed (please refer to the release notes for full details).
Customers wishing to connect to their local LocatorHub services in ArcGIS 10.1 for Desktop should therefore use the Esri UK Online Services toolbar.
NB: you must have a full license of LocatorHub to connect in the way described below (the runtime licenses of LocatorHub included in applications like LocalView Fusion don’t allow connection via the Esri UK Online Services toolbar).
To connect to a LocatorHub service you should now use the following workflow:
- Download the Esri UK Online Services toolbar
- Unpack the contents of the ZIP file to the desired location
- Double-click the add-in (ESRIUK.Services.ArcGIS.esriAddIn) to install it
- (Optional) Log in to the online services with your account details by selecting Services > Account (register for an account here: https://www.esriuk.com/products/data/online/register)
- Select Services > Configure Locators
- Select Add
- Select Local Server and enter the URL of your service (e.g. http://
- Select Connect (entering any authentication information as required)
- Select the required Locator from the dropdown list
- Click OK
- You should now be able to use the Locator service from the Esri UK Online Services toolbar
I was recently writing a geoprocessing model to calculate the density of a point feature class from which all the areas above a specific threshold could be selected. I had been thinking about writing this for a while and had in mind the process I would use, but I soon discovered that I didn’t understand how the Density tools in Spatial Analyst work and would need to find an alternative.
The Density toolset in the Esri Spatial Analyst extension contains three tools: Kernel Density, Line Density and Point Density. I had thought that I would run the Point Density tool and then use the Raster Calculator or even the Contour tool (which would have taken me straight to vector format) to select out the areas above my threshold.
But I hadn’t taken into account the method by which the Point (and Line) Density tool calculates the output cell values. The ArcGIS Desktop 10 help says:
By calculating density, you are in a sense spreading the values (of the input) out over a surface. The magnitude at each sample location (line or point) is distributed throughout the study area, and a density value is calculated for each cell in the output raster.
It was the last part that I hadn’t thought about: a ‘density value’ is calculated for each cell in the output raster. What unit would the density value be in?
Let me give an example. You are analysing population density and want to identify all the areas where the density is greater than 500 people per square kilometre. You open the Point Density tool, choose the neighbourhood, and set the units to square kilometres.
You’ve set the units to square kilometres so the values of the cells in the output raster are ‘number of people per square kilometre’. Right? Well, sort of.
The Point Density tool totals the number of points that fall within a neighbourhood, applies your population weighting if you have chosen one, and then divides this total by the area of the neighbourhood. It then applies a scaling factor according to the area units you selected.
An example is a farm house with a population of 4 and no other houses nearby. The Point Density tool will total the number of points within the neighbourhood (1 farm house), weight it by the population field (4 people) and divide it by the area of the neighbourhood (in the example above a circle of 250 m radius, or 196,349.5 m²). As the units were set to square kilometres, the resulting figure (0.00002037) is multiplied by 1,000,000 (the number of square metres in a square kilometre), giving a cell value of 20.37. But what does that mean?
My head says that logically there are 4 people living in the area, but my density raster gives me a value of 20.37. Now apply this to a city, or a country, and how do I now select out areas above my threshold of 500 people per square kilometre? This was especially confusing as I wasn’t modelling population, I was modelling energy use. I wanted to identify areas where the demand for energy was high. The output units were simply not what I was expecting.
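To see where the 20.37 comes from, the arithmetic is easy to reproduce in plain Python (a quick sketch using the farm house numbers from the example above):

```python
import math

population = 4        # one farm house, weighted by its population field
radius_m = 250.0      # neighbourhood radius in metres

area_m2 = math.pi * radius_m ** 2            # ~196,349.5 square metres
density_per_m2 = population / area_m2        # ~0.00002037 people per m2
density_per_km2 = density_per_m2 * 1000000   # scale factor for km2 units

print(round(density_per_km2, 2))  # 20.37
```

So the cell value is a rate (people per square kilometre at that location), not a head count, which is exactly the mismatch described above.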
So I went back to the drawing board, or in this case the Desktop online help. I eventually came across the Neighbourhood toolset, containing six tools which I had never used before: Block Statistics, Filter, Focal Flow, Focal Statistics, Line Statistics and Point Statistics.
It was the last one that caught my eye. The help says that the Point Statistics tool calculates statistics on point features that fall in the neighbourhood around each output raster cell. The statistics available include mean, majority, maximum, minimum, standard deviation and most importantly for me, sum. What if I summed the energy demand in a neighbourhood? If I know the area of the neighbourhood is one square kilometre, then I know the output cell values are ‘energy demand per square kilometre’.
I soon realised that this was what I wanted. I was using the energy data to find locations where local demand was high enough to support a Combined Heat and Power (CHP) plant. CHP plants create electricity from fuel and circulate the heat produced through a network of pipes to provide hot water for radiators and taps. To make the most of this efficient process there has to be sufficient local demand for hot water, preferably as close as possible to the source.
From here on in it was easy. I decided to use the Reclassify tool to classify areas above the energy demand threshold as 1 and areas below as NoData, and then the Raster to Polygon tool to convert the areas to vector. This gave me polygons within which the density (or sum!) of energy demand met my threshold and would therefore support a CHP plant.
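The whole chain can be mimicked on a toy grid in plain Python (no arcpy needed; the 3×3 neighbourhood, the demand values and the threshold of 500 are all illustrative assumptions):

```python
# Toy focal sum followed by reclassification, mimicking Point Statistics
# (SUM) then Reclassify. Each output cell sums the 3x3 block of cells
# centred on it; assume that block covers one square kilometre.
def focal_sum(grid):
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += grid[rr][cc]
            out[r][c] = total
    return out

def reclassify(grid, threshold):
    # 1 where the summed demand meets the threshold, None (NoData) elsewhere
    return [[1 if v >= threshold else None for v in row] for row in grid]

demand = [
    [0,   0,   0, 0],
    [0, 300, 400, 0],
    [0, 200, 100, 0],
    [0,   0,   0, 0],
]
summed = focal_sum(demand)
hot = reclassify(summed, 500)
```

In ArcMap the equivalent steps are Point Statistics (with the SUM statistic), then Reclassify, then Raster to Polygon.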
So, in conclusion, if you are doing density analysis, take a look at the Neighbourhood toolset to see if it could help you. Although not advertised in the Desktop help using the word density, I think it has useful parallels.
Before you start out, think carefully about what it is you want to do. I had thought that I was doing traditional density analysis, but knowing that ultimately I wanted the sum of something within a defined area might have helped me get there quicker.
Finally, don’t underestimate the ArcGIS help. After 8 years of specialising in Desktop I still use the help most weeks and always learn something new.
One of the challenges I often face is trying to show how clusters of events (e.g. crime locations) have changed over time. ArcGIS through Spatial Analyst and/or Crime Analyst allows you to create great hot-spot maps for a period in time, but how do you create animated hot-spot maps? Here are a couple of methods I’ve come up with:
1. Using Animated Group layers
ArcMap allows users to create animations by cycling through the layers within a group layer one at a time, either in the order they are listed or in reverse. To use this method, create a hotspot layer for each of the desired time periods (in my example, one per month) and group them together as a group layer.
Using the Animation toolbar, it is then possible to Create a Group Animation which will cycle through all the layers in the group one at a time. Fading transitions and blending can also be set to improve the visual effect of the animation.
NOTE: If you are animating semi-transparent layers over a background map which is also set to be semi-transparent you may experience a flashing effect. This can be resolved by setting the background layer to be completely opaque.
There are many benefits to this approach of animating time data, but here are a couple I have noted:
A. It doesn’t matter what time periods we are using, so long as the layers have been created. You can easily animate years, months, weeks, times of day, etc. in just the same way.
B. You can use this method to animate through any number of layers which in turn may be grouped. For example, I have created a Group Layer which contains a sub group for each month comprising a hotspot and mean centre point location.
Relevant sections of help file: Creating Group Layer Animations
2. Using Mosaic Datasets and Time Awareness (ArcGIS 10 only)
One of the downsides to the first approach is that it doesn’t leverage the new time-awareness capability introduced at ArcGIS 10. This makes it very easy to bring vector data with a time or date to life by playing it back using the time slider tool. But how can you achieve the same result with raster data? I have created a number of demonstrations recently where I created hot-spots for the same area based on data for different time periods. Here’s how:
i. Create a raster for each time period – if your data is suitably formatted, this is a simple process to automate using an iterative model. (e.g. for each unique month, select all the data, create a hot-spot and export as a new raster using the month name as the file name).
ii. Create a new Mosaic Dataset (new at v10) and load all these rasters into it – again this could be part of the model.
iii. When you look at the Mosaic Dataset, one part of it is the raster footprints. If you open the attribute table for the footprints, you will see that you have one record per raster that you’ve added.
iv. Now add an additional ‘date’ field to this table and populate with a date. For month based data, I set this to the 1st of that month.
v. Once this is set up, you can make the Mosaic Dataset ‘time aware’ and use the Time Slider to animate it.
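Step iv is easy to script once you know how each raster is named. A sketch in plain Python (it assumes a hypothetical ‘Mar2011’-style naming convention from step i; in practice you would write the values into the footprint table’s date field with an update cursor):

```python
from datetime import datetime

def first_of_month(raster_name):
    # Parse a hypothetical 'Mar2011'-style raster name; strptime defaults
    # the day to 1, i.e. the 1st of that month.
    return datetime.strptime(raster_name, "%b%Y")

print(first_of_month("Mar2011").date())  # 2011-03-01
```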
There are many benefits to this approach of animating time data, but here are a couple I have noted:
A. Using this method, you can also publish the result as a time-aware service and play it back through a web application, but note this requires the Image Server extension.
B. By having all the rasters in the same layer, you only need to set the symbology once and you can then ensure the classification settings are consistent across all rasters. In other words, you will only get a ‘really’ hot spot for the month when the values were particularly high.
Relevant sections of help file: What is a Mosaic Dataset
Prior to ArcGIS 10, we had a nice little free application called the Cartographic Text Renderer, which essentially labelled the Ordnance Survey MasterMap annotation layer. Whilst this is a great little tool for pre-ArcGIS 10 installations, it doesn't work correctly at 10. Also, the new basemap layer renderer is incompatible with the labelling methods used in the tool.
(Basemap layers, new at ArcGIS 10, are a group of layers that draw continuously during navigation and significantly improve the display speed and responsiveness of the map. If you’re not using them already then they are definitely worth investigating. Further information can be found in the following blog post.)
So, if you want to be able to use OS MasterMap annotation in a basemap layer, then you could use the following method in ArcMap:
- Enable labels on the OSMMANNO layer
- Go to “Placement Properties” under “Other Options”
- If using the Standard Label Engine ensure that “On the line” is checked. If using Maplex then you will probably want to use Street Placement and make further modifications/tweaks to your labels as you see fit.
- Add the following as a label expression: "<FNT name='Arial' size='" & [TEXTHEIGHT]*5 & "'>" & [TEXT] & "</FNT>" (including the quotes)
- Of course, at this point you can change the way the label looks! For example, I also quite like "<FNT name='Gill Sans MT' size='" & [TEXTHEIGHT]*4 & "'>" & [TEXT] & "</FNT>"
- Optional: Click on the “Symbol” button in the “Text Symbol” area
- Optional: Click on “Edit Symbol”, and click on the “Mask” tab
- Optional: Add a Halo of 0.5 to the layer
- Click OK/Apply to all windows
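The label expression in the steps above does nothing more than wrap the TEXT value in an FNT tag sized from TEXTHEIGHT. Here is a plain-Python sketch of the same logic, if you want to sanity-check the output (the field values are invented):

```python
def fnt_label(text, textheight, face="Arial", factor=5):
    # Mirror of the VBScript label expression: wrap TEXT in an FNT tag
    # whose size is TEXTHEIGHT scaled by a tuning factor (5 above).
    return "<FNT name='{0}' size='{1}'>{2}</FNT>".format(
        face, textheight * factor, text)

print(fnt_label("High Street", 0.5))
# <FNT name='Arial' size='2.5'>High Street</FNT>
```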
At this point you could use the newly labelled layer in a basemap layer: just drag the layer into the basemap layer. However, I’ve found that converting it into annotation performs better, so to do this you need to complete the following:
- Zoom to 1:800
- Right click on OSMMANNO and click “Convert Labels to Annotation”
- Choose the “In a Database” option for “Store Annotation”
- Choose the “All Features” option for “Create Annotation For”
- Un-tick the “Feature Linked” box
- Click on the file symbol to set the output file location and name
- Click “Convert” (performance will depend on amount of data)
The annotation layer created using this process won’t look exactly the same as the results achieved using the Esri UK tool but it will be near enough and also perform well in the Basemap layer. And here is the result:
If you’re still using ArcGIS 9.3.1 the Cartographic Text Renderer is a great way to label the OS MasterMap Topo Layer. The Cartographic Text renderer can be downloaded from MyESRI UK.
One of our readers has pointed out that the original post missed a special case in the workflow. If the label field values include the ampersand character (&), the label expression will need to include a function with text substitution to change "&" to the escape sequence "&amp;". To do this, check the Advanced box in the label expression dialog and paste in the function below:
Function FindLabel ( [TEXTHEIGHT], [TEXT] )
  if instr( [TEXT], "&" ) > 0 then
    [TEXT] = replace( [TEXT], "&", "&amp;" )
  end if
  FindLabel = "<FNT name='Gill Sans MT' size='" & [TEXTHEIGHT]*4 & "'>" & [TEXT] & "</FNT>"
End Function
Verify the expression, remembering that the field names [TEXTHEIGHT] and [TEXT] need to match the field names in your feature class. Labels that include the & character should now draw correctly. A similar approach would work with other special characters.
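To generalise the fix, the substitutions can be rolled into one helper. A sketch in Python rather than VBScript (the set of escapes follows the usual XML rules that the formatting tags observe; treat it as an illustration, not the tool's own code):

```python
def escape_label(text):
    # Escape characters that clash with ArcGIS text formatting tags.
    # '&' must be handled first, or later escapes get double-escaped.
    for raw, escaped in (("&", "&amp;"), ("<", "&lt;"), (">", "&gt;")):
        text = text.replace(raw, escaped)
    return text

print(escape_label("Marks & Spencer"))  # Marks &amp; Spencer
```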