When I calculated bearings between points using the bearing() function in the geosphere package, the resulting bearings spanned -180 to 180 degrees. However, based on the geosphere package documentation, I expected the bearings to span 0 to 360 degrees. Here's a quote from the documentation:
Thus, the directions that the function geosphere::bearing() delivers are azimuths (angles), expressed in degrees from -180 to 180. That is the relative direction you would turn when facing North: the function assumes you are looking North, so your destination (point p2) is reached with an initial left or right turn, i.e. a negative azimuth indicates a left turn, while a positive azimuth places your destination on your right-hand side.
When you speak about a standardised initial course direction (North := 000 or 360 degrees), your reference is not which way you look, but which course direction you follow. For example, ships or aircraft fly a specific course while they have to correct for wind offsets. I will not go into more detail here on the difference between course and heading (the direction the nose of the ship or aircraft points). However, to determine the course, a left turn (negative azimuth) is subtracted from 360 (North), while a positive azimuth is added to 0 (North).
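The conversion described above is just modular arithmetic and can be written as a one-line helper; this is my own sketch, not part of geosphere itself:

```r
# Normalise an azimuth in -180..180 (as returned by geosphere::bearing())
# to a 0..360 compass course. Works on vectors as well as single values.
to_course <- function(azimuth) (azimuth + 360) %% 360

to_course(-90)   # a left turn from North -> course 270
to_course(45)    # a right turn -> course 45
```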
This package implements functions that compute various aspects of distance, direction, area, etc. for geographic (geodetic) coordinates. Some of the functions are based on an ellipsoid (spheroid) model of the world; other functions use a (simpler, but less accurate) spherical model. Functions using an ellipsoid can be recognized by having arguments to specify the ellipsoid's radius and flattening (a and f). By setting the value of f to zero, the ellipsoid becomes a sphere.
There are also functions to compute intersections of rhumb lines, to compute the distance between points and polylines, to characterize spherical polygons, to sample randomly on a sphere, and to compute day length. See vignette('geosphere') for examples.
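To illustrate the a and f arguments mentioned above, a small sketch (points are illustrative): distGeo() uses the WGS84 ellipsoid by default, and passing f = 0 switches it to a spherical model of the same radius.

```r
library(geosphere)

p1 <- c(0, 0)   # lon, lat
p2 <- c(1, 0)

distGeo(p1, p2)                       # default WGS84 ellipsoid (a and f)
distGeo(p1, p2, a = 6378137, f = 0)   # same radius, flattening 0 -> sphere
```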
I have been tasked with calculating the distance between two locations for several individuals. As seen in the image, I have a subset of 5 "IDs" that contain a starting "lat" and "long" and a final "lat" and "long". Rows 1 & 2 belong to one individual, rows 3 & 4 to the next, and so on.
I have seen people say that the geosphere package works for this; however, I have a large dataset and have not found a way to apply that package to my data. I began calculating some of the distances by typing them out by hand, but it would be great to learn a way to extract specific rows and columns automatically. Perhaps I am missing a detail of this method; any suggestions/help would be much appreciated!
I have not used the geosphere package, but looking at the documentation of the distGeo() function, something like the following should work. The code makes separate matrices of the starting and ending positions by selecting odd and even rows. The last step is marked as a comment because I don't have the package installed to test it.
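The code itself did not survive in this excerpt; a sketch of what the reply describes, assuming a data frame df with columns "long" and "lat" and alternating start/end rows:

```r
library(geosphere)

odd  <- seq(1, nrow(df), by = 2)   # starting positions: rows 1, 3, 5, ...
even <- seq(2, nrow(df), by = 2)   # ending positions:   rows 2, 4, 6, ...

p1 <- as.matrix(df[odd,  c("long", "lat")])   # distGeo() expects lon, lat order
p2 <- as.matrix(df[even, c("long", "lat")])

# Last step left as a comment, as in the original reply:
# distGeo(p1, p2)   # geodesic distance in metres, one value per individual
```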
My question is: how do I calculate the destination point for each long/lat origin coordinate, given a distance and bearing? I have used destPoint() in the geosphere package, but I cannot find the equivalent in sf.
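For reference, a minimal sketch of the geosphere approach the question mentions (coordinates are illustrative): destPoint() takes lon/lat points, a bearing in degrees, and a distance in metres.

```r
library(geosphere)

# Destination 100 km due east of an origin point:
destPoint(p = c(-123.37, 48.65), b = 90, d = 100000)

# destPoint() is vectorised: a matrix of origins and vectors of
# bearings/distances return one destination per row.
```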
That should install an up-to-date version. If that doesn't work, it may be one of the dependent packages that is not up to date. In that case, the first thing to try is to figure out which package it is; I did this by trying to install the package using the R tool from the developer tab. In your case something like this should work; you just need any data stream to go in.
After opening this application I'm not really sure how to use it. The only section that is clear is typing out the packages, with a comma between each package. Everything else is unclear about what needs to be done. For example, what is "Install the package(s) to your default personal directory"? And what does "Select the folder where you would like the package(s) installed (you must have write permissions)" mean?
Each example within this tutorial introduces a variety of R packages to visualize maps in R. Below we provide short descriptions of the R packages used in each example throughout this tutorial, which can be installed with the following code:
Redefine the migration data set to include only column 1 and columns 6 through 56 of the data. Then use the melt function from the reshape package to transform the data set into rows representing unique instances of data, based on a selected id variable (in our case, the from_state variable). For more on the melt function, see r-bloggers.com/melt/.
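The steps above can be sketched as follows, assuming the data set is called migration (the name is a guess from context):

```r
library(reshape)   # reshape2::melt() works the same way here

migration <- migration[, c(1, 6:56)]                       # keep columns 1 and 6-56
migration_long <- melt(migration, id.vars = "from_state")  # one row per (state, variable) pair
```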
A significant Mantel test will tell you that the distances between samples in one matrix are correlated with the distances between the same samples in the other matrix. Therefore, as the distance between samples increases in one matrix, the distance between the same samples also increases in the other matrix.
If you are interested in using physical distance between samples as a matrix for the Mantel test, the geosphere package contains a function for calculating Haversine distances (distances between two points on a sphere) given latitude and longitude.
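A hedged sketch of that step, assuming a data frame called samples with "long" and "lat" columns (names are placeholders): distm() with fun = distHaversine builds the full pairwise matrix.

```r
library(geosphere)

# Pairwise great-circle distances (in metres) between all samples:
geo_dist <- distm(samples[, c("long", "lat")], fun = distHaversine)
geo_dist <- as.dist(geo_dist)   # the form vegan::mantel() accepts
```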
The first column is the sample name, and the next 4 columns contain environmental parameters for each sample (e.g. Salinity, Temperature, etc.). The following 2 columns contain the latitude and longitude for each sample, and the remaining columns contain the 200+ OTU abundances that correspond to each sample.
From the results, I can see that the temperature distance matrix has a strong relationship with the species Bray-Curtis dissimilarity matrix (Mantel statistic R: 0.667, p value = 1e-04). In other words, as samples become more dissimilar in terms of temperature, they also become more dissimilar in terms of microbial community composition.
In this case we need to scale the environmental data prior to creating a distance matrix. This is because the environmental variables were all measured using different metrics that are not comparable to each other.
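As a small, self-contained illustration of that scaling step (toy numbers, not the study data):

```r
env <- data.frame(Salinity = c(30, 32, 35),
                  Temperature = c(10, 14, 20))

# scale() centres each column to mean 0 and rescales it to unit variance,
# so variables measured in different units become comparable:
env_scaled <- scale(env)
env_dist <- dist(env_scaled)   # Euclidean distances on the scaled values
```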
Stating the statistical values from the Mantel test is a sufficient way to report the results of these tests. I also think that plotting the correlation as a pairwise scatter plot can be an intuitive way to show these rather complex relationships. Check out this tutorial to see how to make scatter plots in R.
airportr is a lightweight package to help deal with a few common airport-related tasks. This package bundles open-license airport data from OpenFlights with several utility functions and does not require any API calls or dependencies beyond dplyr.
There are four simple lookup functions that work by taking some kind of input, such as an airport name, an airport IATA/ICAO code, or a city name, and returning structured and consistent data. This can be as simple as finding out what airport YYJ is:
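For example (function names per the airportr documentation; the exact output shape may differ slightly between package versions):

```r
library(airportr)

airport_detail("YYJ")                          # full record for IATA code YYJ
airport_lookup("YYJ", output_type = "city")    # just the city it serves
```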
The lookup functions are designed to be robust to any of the three standard inputs, whether an IATA code, an ICAO code, or the full name of an airport, though specific input and output types can be set as function parameters. IATA and ICAO codes are more robust and easier to use than names, since names need to match exactly and there may be similarly named airports in multiple countries. ICAO codes in particular are more complete than IATA codes, which do not cover all smaller and domestic airports. Lookups by airport name are designed to return potential similarly named matches, alongside a warning, if there is no exact match.
Cities will often have multiple airports serving them. This is especially common for larger cities. Typically when working with airport origin/destination data, an analyst might need to identify what cities those airports actually serve. The city_airports() function helps with this.
Sometimes a city lookup is insufficient. Baltimore/Washington International Airport (BWI) serves Baltimore, but is typically grouped with other DC-area airports like DCA and IAD as a set of airports serving a particular metro area. We can look up airports that fall within a specified distance of one another using the airports_near() function, which takes an airport name or code as an argument alongside a specified distance radius in kilometres.
And sometimes all you have is a pair of coordinates. The airports_around() function takes a pair of lat/lon coordinates in decimal degrees as arguments and returns all airports that fall within a given radius.
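Hedged examples of both lookups (arguments passed positionally where I am unsure of the exact parameter names; distances in kilometres):

```r
library(airportr)

airports_near("BWI", distance = 50)             # airports within 50 km of BWI
airports_around(39.18, -76.67, distance = 50)   # same idea, from raw lat/lon
```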
When working with origin/destination data, sometimes you need to calculate the distance between two airports. airport_distance() calculates the distance between any two pairs of three-letter IATA codes. Distances are calculated using the Haversine formula:
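The formula itself did not survive in this excerpt; a base-R version for reference, using a mean Earth radius of 6371 km (airport_distance() may use slightly different constants):

```r
haversine_km <- function(lat1, lon1, lat2, lon2, r = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 +
    cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * r * asin(sqrt(a))   # great-circle distance in kilometres
}

haversine_km(0, 0, 0, 1)   # one degree of longitude at the equator, ~111.2 km
```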
This data is not suitable for navigation. OpenFlights does not assume any responsibility whatsoever for its accuracy, and consequently assumes no liability whatsoever for results obtained or loss or damage incurred as a result of application of the data. OpenFlights expressly disclaims all warranties, expressed or implied, including but not limited to implied warranties of merchantability and fitness for any particular purpose.
This was a fun little project to take on to comprehensively address a few different common tasks I face at work. I hope that this lightweight package can be useful to others who work with similar data, and I encourage anyone with suggestions for how it can be made more useful still to open an issue or PR on GitHub or send me an email.
leaflet is an open-source JavaScript library that is used to create dynamic online maps. The identically named R package makes it possible to create these kinds of maps in R as well. The syntax is identical to the mapdeck syntax. First the function leaflet() is called, followed by different layers with add*(). Again, the pipe operator %>% is used to add layers on top of each other.
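A minimal sketch of that pattern (coordinates and popup text are illustrative):

```r
library(leaflet)   # re-exports the %>% pipe operator

leaflet() %>%
  addTiles() %>%                      # default OpenStreetMap base layer
  addMarkers(lng = -123.37, lat = 48.65,
             popup = "An example marker")
```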