Online privacy policies are the primary mechanism for informing users about the data practices of online services. In practice, users ignore privacy policies because policies are long and complex to read. Since users do not read privacy policies, their expectations regarding the data practices of online services may not match a service's actual data practices. Mismatches may result in users exposing themselves to unanticipated privacy risks, such as unknowingly sharing personal information with online services. One approach for mitigating privacy risks is to provide simplified privacy notices, in addition to privacy policies, that highlight unexpected data practices. However, identifying mismatches between user expectations and services' practices is challenging. We propose and validate a practical approach for studying Web users' privacy expectations and identifying mismatches with practices stated in privacy policies. We conducted a user study with 240 participants and 16 websites, and identified mismatches in collection, sharing, and deletion data practices. We discuss the implications of our results for the design of usable privacy notices, for service providers, and for public policy.
I've created a walk-time map in ArcGIS Online, based on library locations within a city, using 5-minute increments between 0-5 mins and 25-30 mins. The attribute table shows these are stored as whole minutes in two columns (for each end of the range) e.g. 25.00 and 30.00 - this is what I expect.
The tool runs, but the output has values that are all less than 1 (range 0.06 to 0.68). Given that the lowest possible travel time value is 5, and the census areas are totally covered by travel time areas, how can this be?
There are also unexpected patterns in the data, where a census area that contains a library (and so has low walking time) appears with a (relatively) high value. See the lower-central library in the maps below.
The population value of the blue polygon is 4500, but only 3375 is used since the blue polygon only overlaps partially. In your case there is an additional reason for the values to be so small. Your drive time areas are dissolved. Look at the polygon indicated with the black outline from the example that you provided:
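The apportionment described above can be sketched in a few lines of plain Python. This is an illustrative calculation, not the tool's internals; the function name and the 75% overlap figure (which yields 3375 from 4500) are assumptions for this example.

```python
# Sketch of how a count field is apportioned by overlap fraction.
# The 75% overlap is assumed here to reproduce the 4500 -> 3375 example.

def apportion(count, overlap_fraction):
    """Scale a count by the fraction of the summary polygon
    that falls inside the target polygon."""
    return count * overlap_fraction

blue_population = 4500
used = apportion(blue_population, 0.75)  # only part of the polygon overlaps
print(used)  # 3375.0
```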
Clearly I was working on this too late in the day. With fresh eyes it is clear that the operation would not work as I expected - I looked at the help a few times but evidently the relevant part passed me by.
This looks much better. I did a double take at the legend, thinking the high values are too low, but the dark purple areas go up to values of 22, which is logical - they are just all lumped into ">18". (It could be helpful if the legend indicated what the max value is.)
Thanks again! I can't express how helpful this response was. I hope my workaround is helpful to others in future. Since the calculations needed are actually pretty straightforward, it would be great if this could be incorporated into the tool, e.g. via a parameter allowing you to state if the data to be summarised are counts or not.
I found that the Summarize Within tool does not provide an area-weighted mean within the target polygons. It doesn't do the 'within' part. If your 'input summary feature' polygons extend outside of the target 'input polygon' (that is, they intersect it but are not clipped, run through Identity, or unioned to its boundaries), the tool will average in values from areas outside the target 'input polygon'. This will throw off the area-weighted mean of values within the polygon.
The workaround is to run Union (as mentioned above) or Identity between the 'input polygons' and the 'input summary features' before running the tool, so there are no external areas intersecting the polygons you want the means from.
It seems like most users don't expect areas outside of a polygon to be included in the mean generated by a 'Summarize Within' tool. Perhaps this tool could be called the 'Summarize Intersecting' tool. Then, after adding the identity/union step prior to analysis, the 'Summarize Within' tool could be updated to perform as expected.
As seen in Xander's explanation above, the documentation for Summarize Within shows that the values that are being summarized are calculated based on the proportion of the 'input summary feature' that falls within the 'summary polygon'.
So even though the population of the entire yellow region is 3200, we only want to summarize the amount that falls within Neighbourhood 1, so we multiply the ratio of the yellow region that falls within the bounding polygon by the population value (4/6 * 3200 = 2133) .
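The calculation above is simple enough to verify directly. A minimal sketch, using only the numbers given in the example (a 6-square-mile yellow region with 4 square miles inside Neighbourhood 1):

```python
# 4 of the yellow region's 6 sq miles fall within Neighbourhood 1,
# so 4/6 of its population of 3200 is attributed to the neighbourhood.
yellow_population = 3200
area_total = 6.0    # sq miles, whole yellow region
area_inside = 4.0   # sq miles inside Neighbourhood 1
proportional_value = area_inside / area_total * yellow_population
print(round(proportional_value))  # 2133
```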
This of course assumes that the values can be divided, which is why we recommend using fields with counts and amounts rather than rates or ratios.
When we calculate the mean and standard deviation, we calculate an area-weighted mean and standard deviation of the five regions within Neighbourhood 1. Using area-weighting means that the regions that have more overlap will contribute more towards the mean.
To do this, we take the proportional values calculated above, and then weight them by the amount of the bounding polygon that they occupy. So, calculating the weighted mean for Neighbourhood 1 (which has 17 sq miles total area) would be done with this equation:
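In formula terms, the weighted mean is the sum of each region's proportional value times its overlap area, divided by the total area (17 sq miles). A sketch of that calculation follows; the yellow region's proportional value (2133) and overlap area (4 sq miles) come from the example above, but the other four regions' values and overlap areas are hypothetical, chosen only so the overlaps sum to 17.

```python
# Area-weighted mean: weight each proportional value by its overlap
# area with the bounding polygon, then divide by the total area.

def area_weighted_mean(values, overlap_areas):
    """Return sum(v_i * a_i) / sum(a_i) for proportional values v_i
    and overlap areas a_i."""
    total_area = sum(overlap_areas)
    return sum(v * a for v, a in zip(values, overlap_areas)) / total_area

# Yellow region: 2133 over 4 sq miles (from the example).
# The remaining four regions are hypothetical placeholders.
values = [2133, 1800, 2500, 900, 3100]
overlaps = [4.0, 5.0, 3.0, 2.0, 3.0]  # sums to 17 sq miles
mean = area_weighted_mean(values, overlaps)
```

Regions with more overlap contribute proportionally more to the result, which is exactly the behaviour described in the paragraph above.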
Therefore, the portion of the yellow region which falls outside Neighbourhood 1 doesn't contribute to the mean calculation, except in terms of determining the proportional value that will be used in summary statistic calculations (i.e. the calculation that led us to have a value of 2133 for yellow, instead of 3200).
@EdCarter is that proportional value calculation what you mean when you say "the areas outside of a polygon are included in the mean"?
Can you provide an example dataset/workflow that illustrates what you mean when you say "the tool [will] average in values from areas outside/greater than the target 'input polygon'"?
What would help to make the documentation more clear?
The dramatic expansion of internet communication tools has led to the increased use of temporary online groups to solve problems, provide services, or produce new knowledge. However, many of these groups need help to collaborate effectively. The rapid development of new tools and collaboration forms requires ongoing experimentation to develop and test new ways to support this novel form of teamwork. Building on research demonstrating the use of nudges to shape behavior, we report the results of an experiment to nudge teamwork in 168 temporary online groups randomly assigned to one of four different nudge treatments. Each nudge was designed to spur one of three targeted collaborative processes (collaborator skill use, effective task strategy, and the level of collective effort) demonstrated to enhance collective intelligence in extant research. Our results support the basic notion that digitally nudging collaborative processes can improve collective intelligence. However, to our surprise, a couple of nudges had unintended negative effects and ultimately decreased collective intelligence. We discuss our results using structured speculation to systematically consider the conditions under which we would or would not expect the same patterns to materialize in order to clearly articulate directions for future research.
Thank you to The Network Group for inviting me to speak today. I am Nausicaa Delfas, Executive Director at the FCA, and currently acting COO. Having for a long time been focused purely on the regulatory and supervision side, I am finding I am stepping more into your shoes, now having oversight of internal information security and cyber resilience.
Firstly, I want to look at the threat landscape from our perspective. The FCA operates in a unique position in the Financial Services spectrum; we have visibility of over 56,000 firms and we are well positioned to observe the myriad of threats and issues that these firms experience on a daily basis in the cyber world.
Secondly, what can we do about those threats? How should we manage these risks? There are strategies that range from patching and information risk management, to people strategies, to security cultures, to information sharing, to what we can do to collectively improve our understanding of the threats and best practices to mitigate against them.
We have witnessed some interesting changes over the last 12 months, with the re-emergence of some old foes (such as ransomware) and the development of some innovative and dangerous criminal networks.
Attacks exceeding 1.5 Tb per second are now entirely feasible and the scale is expected to grow. We are seeing smart televisions, fridges, routers and cameras being exploited to form botnets (a network of private devices infected with malicious software and controlled as a group without the owners' knowledge, e.g. to send spam), without the owner of the device ever becoming aware. As fibre optic broadband becomes the norm and bandwidth grows exponentially, these devices become capable of being compromised, aggregated and directed at financial institutions, resulting in detriment to consumers and, potentially, impact upon the markets through the loss of service availability.
As a regulator, we are not immune either. In February 2017 the internal systems of Polish Financial Supervision Authority (KNF) were compromised in an attempt to infiltrate Polish banks with malware. At the FCA, we have seen attempts to use the FCA brand in phishing campaigns against the UK financial sector. Such attacks are yet another example of creative cybercriminals leveraging diverse technologies to seed and propagate attacks across multiple financial institutions.