I understand what you mean now, and why this creates an issue with sorting.
Some ideas I can see:
A) Create a plugin that outputs an object for server-side/client-side use. But honestly, this is complex.
B) Create two API calls. The first gets the data as text (what you actually receive); an action then transforms the payload into the shape you want and outputs a JSON file (using a plugin like the Generate and Download Files Plugin for Bubble).
The second API call then does a GET request to the created file's URL.
C) Use another tool like Integromat. You have one API Connector call that hits an Integromat endpoint; Integromat calls the API you want, transforms the JSON payload, and returns it at the end (see the sketch of the transform step after this list).
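For what it's worth, the transform step in B/C is usually tiny. A minimal sketch in Python (the input shape and field names are hypothetical; in Integromat you would do the same with its JSON modules, and in a Bubble server-side action it would be JavaScript):

```python
import json

# Hypothetical raw payload: the API returns numeric fields as strings,
# which is the kind of thing that breaks sorting on the consumer side.
raw_text = '[{"name": "a", "price": "12.5"}, {"name": "b", "price": "3.9"}]'

def transform(payload: str) -> str:
    """Re-emit the payload with real numbers so it can be sorted on 'price'."""
    items = json.loads(payload)
    for item in items:
        item["price"] = float(item["price"])  # string -> number
    return json.dumps(items)

print(transform(raw_text))
# [{"name": "a", "price": 12.5}, {"name": "b", "price": 3.9}]
```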
As a specific example, I extracted data from a recent planet.osm PostGIS dataset for an area in Missouri. The OSM data needs a lot of cleaning before it can be rendered with the correct styles. Many water bodies are stored as linestrings that don't close properly, so I have to use FME for snapping and then polygon building in order to get blue-filled rivers and lakes.
I'm having trouble identifying all the cases where snapping is required (e.g. which 'natural' types require it and what the tolerance should be). I also suspect there are many other data issues that I will never see, as I am dealing with all of North America.
A planet.osm file is downloaded and loaded into PostGIS with Osmosis, using the pgsql schema. I then extract OSM XML from PostGIS for lots of small areas, again using Osmosis. Each of these small XML files is then converted into shapefiles using FME and its broad feature categories. It is at this stage (OSM XML -> Shp via FME) that I expect to convert lines into polygons and perform other cleanup on the data.
There are two main ways to consume OSM data - by using osm2pgsql, an older utility that supports 'stylesheets' and differential updates, and Imposm, a newer, Python-based system that supports Python-based stylesheet transforms. When people do processing, a lot of it is in that kind of script. For instance, here's an imposm mapping for osm-bright, the stylesheet upon which MapBox Streets (disclosure/employee) is based.
To be more specific to what you're encountering: it's likely that you aren't processing OSM relations properly, which, in the data model, are what allow multiple linestrings to form polygons. Tools like Imposm and osm2pgsql generally handle this kind of data transformation for you.
As far as how OSM.org itself does things: edits are in a 'semantic' Postgres database, and continuously imported into a PostGIS database with osmosis, and rendered with Mapnik. There's no manual cleanup step between the database and map rendering, since the two are highly coupled and the map aims to be up-to-date.
In general you don't need "snapping" as such, since the original OSM data is topologically organized: a polygon (= OSM way), for example, is defined through a list of node references rather than directly by coordinates, so if the first and last references are the same, it is considered a closed polygon. Otherwise, it's a polyline (like a road).
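As a rough illustration of that closed-way rule, here is a minimal Python sketch that reads OSM XML with the standard library and reports which ways are closed rings versus open linestrings. The file name is a placeholder, and multipolygon relations would still need the extra assembly step that Imposm/osm2pgsql do for you:

```python
import xml.etree.ElementTree as ET

# Placeholder extract, e.g. one of the small Osmosis-exported areas.
tree = ET.parse("extract.osm")

for way in tree.getroot().iter("way"):
    node_refs = [nd.get("ref") for nd in way.findall("nd")]
    # A way is a closed ring if its first and last node references match.
    if len(node_refs) >= 4 and node_refs[0] == node_refs[-1]:
        kind = "closed ring (candidate polygon)"
    else:
        kind = "open linestring"
    tags = {t.get("k"): t.get("v") for t in way.findall("tag")}
    print(way.get("id"), kind, tags.get("natural", ""))
```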
Yes, there are other data issues, too. Mainly they stem from the very nature of OSM mapping: different people map things differently and there are no set-in-stone rules on how to do it. It's more or less a self-organized anarchy ;)
I myself never work with flattened OSM data produced by osm2pgsql - I always start with original topological data in OSM XML form and write code to process that into the form I need. But then again I don't use Mapnik for rendering, so I'm probably in the minority.
If you use the original database schema from osm2pgsql, the related OSM data models 'closed way' and 'multipolygon relation' are transformed into polygons and put into a table called 'planet_osm_polygon'. Ways and nodes go into 'planet_osm_line' and 'planet_osm_point'. You can access these tables via Quantum GIS and export them directly to shapefiles. You can also run SQL queries from inside Quantum GIS to filter the data.
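If it helps, a minimal sketch of pulling water polygons out of that schema from Python (psycopg2 and the connection settings are my assumptions; the same SELECT works from QGIS, and the exact columns depend on the style file used at import time):

```python
import psycopg2  # assumed installed; any Postgres client works the same way

# Hypothetical connection settings for the osm2pgsql-populated database.
conn = psycopg2.connect(dbname="gis", user="osm", host="localhost")

sql = """
    SELECT osm_id, name, ST_AsText(ST_Transform(way, 4326)) AS wkt
    FROM planet_osm_polygon
    WHERE "natural" = 'water' OR waterway IS NOT NULL
    LIMIT 10;
"""

with conn, conn.cursor() as cur:
    cur.execute(sql)
    for osm_id, name, wkt in cur.fetchall():
        print(osm_id, name, wkt[:60])
```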
I would not use Osmosis for that. It does not have the polygon handling that osm2pgsql does. Osmosis stores the data the same way contributors deal with it (nodes, ways and relations), which is not a suitable database schema for rendering.
You'll note that the data in the File field looks like gibberish. This is because the test tool is actually sending the file as raw data (the web browser handles that part), and Fiddler attempts to encode it as ASCII. But the test itself succeeds.
I'm first reading the edoc using a script and putting its data into a MemoryStream. Then I convert that stream into an array and concatenate the contents into one string, which I store in a token called %(Binary), which has a token tag of "File".
You'll note that Workflow is sending a string representation of the data, rather than the raw data the way the test tool does. It even automatically sets Content-Transfer-Encoding to binary, but it doesn't convert the string to binary (which it should, since the token has a "File" tag and the Field type is set to "File").
Interestingly enough, Workflow always seems to insert the Content-Transfer-Encoding header into the form-data object, even if you leave it blank in the configuration. But I don't think that's the issue - it works fine if I add it to my own function.
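For comparison, this is roughly what the test tool is doing: the file part goes out as raw bytes with a filename and content type, not as a decoded string. A minimal Python sketch with the requests library (the URL and file path are placeholders; the "File" field name mirrors the form-data field above):

```python
import requests

url = "https://example.com/upload"  # placeholder endpoint

# The file is streamed as raw bytes inside the multipart body,
# which is what the string token representation loses.
with open("document.pdf", "rb") as f:
    files = {"File": ("document.pdf", f, "application/pdf")}
    response = requests.post(url, files=files)

print(response.status_code)
```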
Hi
This may not necessarily be a data-processing issue, but I would like some suggestions.
I am currently using cryoSPARC on a standalone workstation, and I have a 12 TB hard drive as my work directory. The movies are backed up on a separate server; they are transferred to, and imported from, this 12 TB drive. Once I have my particles extracted, can I delete the original movies? That would clear up a lot of storage space, but will it hinder the data processing?
I plan to install a couple of Pi boxes with the software soon. I am willing to try to archive the data in some storage (yet to be defined). I have seen some attempts that modify (fa)dump1090 to write to MySQL, for example. My preference would be to stay non-intrusive and avoid any modification of the software. My thought was therefore to connect the two boxes using nc, as explained on the GitHub homepage, and to connect my external component to store the data through port 30002. I therefore have a couple of questions:
1- Is it a good idea to go with the raw data? If so, how do I decode it back into physical data? (See the sketch after this list.)
2- Is anyone aware of something that already exists?
3- Is there any easy way to encrypt the data (between the boxes) other than tunnelling over SSH?
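On question 1: the raw feed on port 30002 is just lines of hex like `*8D4840D6202CC371C32CE0576098;`. A minimal, non-intrusive sketch that tails that port and decodes a few physical fields with the third-party pyModeS library (the host address and the choice of library are assumptions on my side):

```python
import socket

import pyModeS as pms  # third-party: pip install pyModeS

HOST, PORT = "192.168.1.10", 30002  # hypothetical address of the dump1090 box

with socket.create_connection((HOST, PORT)) as sock:
    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:  # connection closed by the far end
            break
        buffer += data
        while b";" in buffer:
            frame, _, buffer = buffer.partition(b";")
            msg = frame.strip().lstrip(b"*").decode()
            if len(msg) != 28 or pms.df(msg) != 17:  # keep 112-bit ADS-B (DF17) only
                continue
            icao = pms.adsb.icao(msg)
            tc = pms.adsb.typecode(msg)
            if 9 <= tc <= 18:
                print(icao, "barometric altitude (ft):", pms.adsb.altitude(msg))
            elif tc == 19:
                # Returns (speed, track, vertical rate, speed type). Positions need
                # an even/odd frame pair (pms.adsb.position), omitted here.
                print(icao, "velocity:", pms.adsb.velocity(msg))
```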
MyPyramid Food Data provides information on the total calories; calories from solid fats, added sugars, and alcohol (extras); MyPyramid food group and subgroup amounts; and saturated fat content of over 1,000 commonly eaten foods with corresponding commonly used portion amounts. This information is key to help consumers meet the recommendations of the Dietary Guidelines for Americans and manage their weight by understanding how many calories are consumed from "extras." CNPP has created an interactive tool from this data set available on the web at MyFood-a-pedia.gov. A mobile version is coming soon to provide consumers with assistance on-the-go.
By design, a DirectShow topology needs to be complete, starting with a source (the microphone) and terminating with a renderer filter, and data exchange in DirectShow pipelines is private to the connected filters, without exposing the data to the controlling application.
The "intended", "DirectShow way" is to develop a custom renderer filter that connects to the microphone filter and receives its data. More often than not, developers prefer not to take this path, since developing a custom filter is somewhat complicated.
The popular solution is to build a pipeline Microphone --> Sample Grabber --> Null Renderer. Sample Grabber is a filter that exposes the data passing through it via the SampleCB callback. Even though it's getting harder with time, you can still find tons of code that does the job. Most developers prefer this path: build the pipeline from ready-to-use blocks and forget about the DirectShow API.
I do this quite often in ADE-XL, using both corners and Monte Carlo analyses with an Ocean script. I retrieve the data (as well as some pertinent variables such as the simulated corner, temperature, and supply voltage), post-process it, and write the results to a file. I often use a while() loop to run through the simulation directory. If it is useful at all, I've included an example. The results were stored in the directory "Interactive.17". You can navigate to your results and substitute the appropriate paths and number of simulations (sim_num_max). Of course, for Monte Carlo, the directory is not going to be named "Interactive.XX". I hope I understood your question!
You are right, I did not understand your question: it was not clear to me that you were looking only for the results of its measurements, not the raw data. Dumb question on my behalf, but is there a reason you cannot just take the data from the ADE-XL results data as a comma-separated file? You can access the data directly without having to access the results. If you are using IC6.1.6.500.9 or later, you can use the procedure described in Solution ID 20309689.
I often just run an ocean script, pull out the data and recompute it to get the statistics. I find this easier than creating a new expression and re-evaluating the data from a run. This assumes, of course, that you have checked the option to save the data for all the runs!
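As an illustration of that "recompute the statistics" step, a minimal Python sketch assuming the per-run measurements have already been exported to a CSV (the file name and metric column are hypothetical; the Ocean side of pulling the numbers out is unchanged):

```python
import csv
import statistics

# Hypothetical export: one row per Monte Carlo run, one column per measured metric.
values = []
with open("mc_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        values.append(float(row["gain_db"]))  # hypothetical metric column

print(f"runs:  {len(values)}")
print(f"mean:  {statistics.mean(values):.4f}")
print(f"sigma: {statistics.stdev(values):.4f}")
```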
By this comment - "Dumb question on my behalf, but is there a reason you cannot just take the data from the ADE-XL results data as a comma-separated file?" - I get the impression that you think I am kicking off the sim with ADE-XL. I am not. I am using Ocean scripts generated from ADE-XL. There is a difference. The scripts only report the mean and sigmas of a performance metric that I tell the scripts to measure.
There are some files that get generated for legacy reasons called mcdata and mcparam. In current versions, if you have a run directory called Ocean.6 (for example) then these will be under psf//monteCarlo and summarize the entire results for that test.