Best Recent Content
I feel your pain! I model Dry Stack Facilities and Tailings Dams and have to perform a stage storage analysis quite often. These days I use mining software but I used to do this in Global Mapper. In a nutshell you have to run the volume analysis twice against a common 'comparison surface'. So in essence you are computing the volume between the terrain and the plane; and then the dump upper surface and the plane. You then subtract one from the other.
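The two-pass method above can be sketched numerically (this is a generic illustration with made-up grids, not Global Mapper's internal calculation): compute the volume between each surface and a common comparison plane, then subtract.

```python
# Sketch of the two-pass volume method: V_fill = V(dump, plane) - V(terrain, plane).
# Grids, elevations, and cell size below are hypothetical example values.

CELL_AREA = 1.0  # square metres per grid cell (assumed)

def volume_above_plane(grid, plane_z, cell_area=CELL_AREA):
    """Volume between a gridded surface and a horizontal comparison plane below it."""
    return sum((z - plane_z) * cell_area for row in grid for z in row)

terrain = [[100.0, 101.0], [100.5, 101.5]]   # original ground elevations
dump    = [[103.0, 104.0], [103.5, 104.5]]   # dump upper surface elevations
plane_z = 95.0                                # common comparison plane

v_terrain   = volume_above_plane(terrain, plane_z)
v_dump      = volume_above_plane(dump, plane_z)
fill_volume = v_dump - v_terrain  # material stored between the two surfaces
```

Because both volumes are measured against the same plane, the plane's elevation cancels out of the difference, which is why any common comparison surface below both grids works.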
I take no credit for this - please see this site for the general principle: https://ceethreedee.com/?s=stage+storage
I have attached my notes from back in 2016 for your reference. Please see the Excel spreadsheet I have embedded in the Attachments panel on the left. Right-click on the attachment and hit open or save. If I recall you just paste the values into the green cells and it will do the rest for you.
It looks like you are using Global Mapper 22.0.0. The parameters flagged as missing are new, and were added to the 22.0.1 release. Please download the latest release of Global Mapper to resolve this issue.
Thank you for a prompt reply, Jasmine. I've spent today experimenting with simple tasks and watching the Webinars (just for the record--the ratio of useful information to time consumed watching them is rather low; a well-written, comprehensive manual would be much more valuable than the Webinars).
As best I can tell, when I place features that I have "moved" (for instance, by subsetting based on attribute values) to a new layer, those features are duplicated in their entirety WITHIN the GM workspace. So I may, if I wish, close the source layer and I will still retain all of the attributes of the moved features, but all the feature information will be stored in the workspace (and will display in an ASCII editor as gobbledygook). If the features I am using were originally loaded from a file (e.g., a vector shapefile) and I have moved all of them into new layers, the layer created by loading the shapefile (say) will no longer have any undeleted features. If I close that layer (which seems like a reasonable thing to do, since all of the information in it is now redundant and, further, is marked as deleted), it is my understanding that the source file reference is then lost forever from the GM workspace (so that, if several years on, I find myself wondering about the provenance of the features now displayed on the map but visible in the GM workspace only as gobbledygook, well, I'm just out of luck).
I infer that cropping features works much the same way as segregating them into subsets--that the provenance of the cropped features will be lost if I close the layer containing the filename for their source. As a realistic example, if I want to know where all of the MODIS-reported wildfire locations for 2013 in the Cascade Mountains are, I first load the source file, which covers the entire United States. I can then crop the point features to an area feature covering the Cascade range, but I must either continue to keep the layer with the full file in the workspace (which means that every time I load the workspace, a hundred thousand points will load--time-consuming) or I will lose any record of the provenance of those features, unless I remember to manually enter a reminder into the Description of the layer containing the cropped subset.
Beyond this, there is the matter of efficiency. Again, if I understand aright, if I put a bunch of features into a layer, all of the information concerning that layer is stored within the workspace. If I want to keep all of that layer information outside of Global Mapper (either to use in another application or to reuse with Global Mapper), I can select that one layer, hiding all others, remember to ensure that all other vector data (such as graticule and grid) are invisible, and export it as vector data to a (shape)file, after which I would presumably close that layer since its data are now saved. The GM workspace will presumably be much smaller after I close the layer and save. Question: which procedure will be faster for loading and working: (1) keeping my vector data in external (shape)files which are loaded each time the (small) workspace is loaded, or (2) keeping the vector data internally in the (very large) workspace?
If vector data are maintained in external files, is there a difference in efficiency among formats? For example, if all of my vector data are points (e.g., wildfire locations) or simple tracks (roads, trails) lacking topological relationships, would it be more efficient to save them as .gpx or .kml than as shapefiles?
A similar question applies to raster data. It is my impression, for example, that GM loads and handles geoTIFFS much faster than geoPDFs, so I have made it a habit to use GM's batch convert to convert all geoPDFs I encounter to geoTIFFs before working with them. Are there raster formats that would be still more efficient?
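For what it's worth, the GeoPDF-to-GeoTIFF conversion can also be done outside Global Mapper with GDAL's gdal_translate utility (assuming a GDAL build that includes the PDF driver); the function names below are hypothetical, and the compression option is just one common choice:

```python
# Hypothetical batch conversion of GeoPDFs to GeoTIFFs using GDAL's
# gdal_translate command-line tool (requires GDAL with the PDF driver).
from pathlib import Path
import subprocess

def tif_name(pdf_path):
    """Derive the output GeoTIFF filename from a GeoPDF filename."""
    return Path(pdf_path).with_suffix(".tif")

def convert(pdf_path):
    """Convert one GeoPDF to a compressed GeoTIFF via gdal_translate."""
    out = tif_name(pdf_path)
    subprocess.run(
        ["gdal_translate", "-of", "GTiff", "-co", "COMPRESS=LZW",
         str(pdf_path), str(out)],
        check=True,
    )
    return out

# Example batch loop (uncomment to run against a folder of GeoPDFs):
# for pdf in Path(".").glob("*.pdf"):
#     convert(pdf)
```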
Thanks to you (and anyone else who can help).
Other Global Mapper users may have different recommendations, advice, and/or preferences. A lot of workspace management depends on the specific project: the breadth of data used, how much or how little the data has been edited or customized, and whether the project has an end point or is a continuous work in progress.
(1) I have an old workspace with some layers containing cropped vector features from source files; although the features display correctly, I can no longer determine the original source of the data. I imagine that once upon a time I loaded one or more files, cropped the data to the boundary of my map, and (probably) then closed the original huge source file. GM does not seem to save the source file name anywhere I can find it.
Try selecting the layer in the Overlay Control Center and then clicking the Metadata button; this will give you the location of the data on your machine (locally) and, depending on the data type and where you got it from, there may be information on data collection and origin. Please note that, going forward, you can also create metadata for your projects if you would like to store notes or information on data layers that you do not necessarily want visible on the map.
What actually happens when features are "moved" from one layer to a new one? Is doing this inefficient? A similar issue arises when I create new layers to hold subsets of features segregated by attribute values.
This depends a little bit on how the data is moved between layers. Generally, data are copied, edited in some fashion, and then added either to a new layer or to another existing layer. The moved feature may look redundant on the map; in that case it is still present in the original layer.
(For example, a "transportation" source file may include, inter alia, roads and trails as line features; I often want to segregate roads from trails, and additionally to crop both subsets to a map boundary.) Should each new layer (say, containing either a subset of features that have been moved or cropped versions of features in a source layer)
be immediately "exported" to a file that will then be loaded, to serve as the source for the features that have been cropped or segregated by attribute value?
You may want to work with two workspaces - one loaded with all the data you have collected pertinent to your project, and another containing the data after you have tidied it up and separated out the features and areas you are interested in working with.
This may include moving features into separate new layers so that they can be activated and deactivated independently of one another. From your general 'Transportation' layer you may export roads into one layer and trails into another after you have cropped them to the area of interest. You can then create additional layers by attribute value, or apply styles within the existing layer by attribute value if you are just looking for visualization.
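The roads-versus-trails split described above is just an attribute query; as a generic illustration (this is not the Global Mapper API, and the feature records and attribute name are made up), the idea looks like this:

```python
# Generic sketch of splitting vector features by an attribute value before
# export, as in separating roads from trails. Records are hypothetical.
features = [
    {"name": "US-20", "ftype": "road"},
    {"name": "PCT",   "ftype": "trail"},
    {"name": "NF-25", "ftype": "road"},
]

roads  = [f for f in features if f["ftype"] == "road"]
trails = [f for f in features if f["ftype"] == "trail"]
```

Each resulting subset would then be exported to its own layer or file, keeping the full attribute table for the features it contains.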
(2) Is it inefficient to load a large file, then subset and/or crop its features as described in (1), compared with actually exporting a file that contains just the desired features, and loading it instead of the large source file? (The problem with the latter procedure is the difficulty of maintaining configuration control of a proliferation of special-purpose files, each of which contains a specific subset of the features in the original source data file.) If the recommended solution (for (1) and (2)) IS to export a file containing just the features I need from the larger source, is there any guidance on the preferred format (.shp?) for this file, given that I would like to avoid losing attribute information and would like the load to be as efficient as possible?
This is a matter of preference to some extent. I think for many users, having a workspace of data that has been prepped according to the specific project needs is helpful. Since one of the most fundamental methods for customizing and organizing data is to export according to an attribute query or some sort of spatial delineation or extent, it also makes it easier to import the prepped data into a fresh workspace rather than the existing one for visualization and organization purposes.
The SHP file format is a good choice for retaining attribute structure. For some tips and helpful hints on managing workspaces and vector data, I recommend checking out the following Global Mapper Webinars specifically:
Attribute Management in Global Mapper: Blue Marble Geographics Previously Held Webinars
Workflow Optimization: Blue Marble Geographics Previously Held Webinars
Tips & Tricks: Blue Marble Geographics Previously Held Webinars
Hello there,

Here are some features that could be improved or implemented:

- Ability to simply import and export N(name),X,Y,Z text files. Currently we can only export in CSV with the name, but this field is not placed before the coordinates of each point.
- Ability to subsample lidar points based on a distance between points without creating new points (the 3D thinning of GM creates new ones). The points left are exactly the same as in the source point cloud (similar to the "Subsample" tool of CloudCompare).
- Improve the rounding options of the buffer. Often the buffers are not well rounded because they are made of few points. I generally have to create rounded buffers in QGIS, which rounds much better with more buffering parameters, before importing my file back into GM. See the difference below:
- When editing the length of a line, the bearing changes, so the line is not exactly the same afterwards. See below:
- Snapping does not always work as desired, even when playing with the snapping options.
- When selecting polygons, do not count the islands inside these polygons in the number of selected features (very confusing), or at least count them and display their number separately.
- Give the ability not to select islands. It is very confusing when working with concentric rings, because if you click at the center you don't know whether you are selecting the inner ring or the island of the outer ring.
- Give the ability to reorder the features inside a layer (e.g. "send to back" or "bring to front").
- Smooth the rendering of displayed high-resolution raster imagery. When zoomed out, the image looks aliased (no problem when zoomed in, though). See below:

Anyway, GM is awesome. Keep it up!
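The distance-based subsampling request (keep only original points, never interpolate new ones) can be sketched with a simple greedy filter; this is an illustration of the requested behavior, not any existing GM or CloudCompare code, and the point list and threshold are made up:

```python
# Greedy distance-based subsampling: keep a point only if it lies at least
# `min_dist` from every point already kept. Output points are a subset of
# the input - nothing new is created.
import math

def subsample(points, min_dist):
    kept = []
    for p in points:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept

pts = [(0, 0), (0.1, 0), (1, 0), (1.05, 0.05), (2, 2)]
thinned = subsample(pts, 0.5)  # near-duplicates dropped, originals preserved
```

This naive version is O(n^2); a production implementation on a real point cloud would use a spatial index (grid or k-d tree) to find nearby kept points quickly.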