Virtual Keyboard Company Leaks Database of 30 Million Customers, Facebook's Facial Recognition Captcha and Facebook's Revenge Porn Prevention Program


Infosec News

Dec 5, 2017, 3:30:33 PM
to Infosec News

INFORMATION SECURITY NEWS

For The Week of 11/29-12/5 2017


The Information Security News Service is a project of LARS (Laboratory for Advanced Research in Systems) in the CS Department at the University of Minnesota Duluth. We send out top stories in information security every Tuesday (except during some academic breaks). If you have stories you’d like to see featured, please email them to infosec...@d.umn.edu.

CURRENT NEWS

Virtual Keyboard Company Leaks Database of 30 Million Users

Ai.Type, a Tel Aviv-based startup, left a misconfigured MongoDB database publicly accessible on the internet. The information stored is the most interesting part of this story, as it highlights the amount of data mining going on behind the scenes in applications like this one. Using the information its clients typed in, Ai.Type was able to collect contacts, phone numbers, Google search queries, and much more. A leak like this underscores the importance of the permissions users grant their applications.

https://mackeepersecurity.com/post/virtual-keyboard-developer-leaked-31-million-of-client-records
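Exposures like this typically come down to a MongoDB instance listening on all network interfaces with authentication disabled. The details of Ai.Type's actual configuration are not public, but a minimal hardened `mongod.conf` illustrating the two settings that matter might look like:

```yaml
# Illustrative mongod.conf hardening -- not Ai.Type's actual configuration
net:
  bindIp: 127.0.0.1       # listen only on localhost, not 0.0.0.0 (all interfaces)
  port: 27017
security:
  authorization: enabled  # require authenticated users for all database access
```

With `bindIp` left at `0.0.0.0` and `authorization` disabled, anyone who can reach port 27017 can read the entire database, which is how scanners routinely find leaks like this one.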


Facebook’s New Captcha Requires Uploading “Clear Photo of Your Face”

Facebook is considering a new captcha that asks you to upload a “clear photo of your face” to verify your identity. The company says it will delete the photo after verification, but it seems like a disturbing trend. According to Facebook, the photo test is intended to “catch suspicious activity at various points of interaction on the site”.

https://www.wired.com/story/facebooks-new-captcha-test-upload-a-clear-photo-of-your-face/


Facebook Wants Your Nudes

Facebook is piloting a system in Australia that lets users pre-emptively upload intimate content (featuring themselves) that they fear will be shared without their consent. A member of Facebook’s Community Operations team reviews the image to verify it is actually content that should be blocked from sharing. Facebook then hashes it and prevents matching images from being uploaded or shared on its network. The user is sent a message asking them to delete the content from the Messenger thread on their phone, and Facebook deletes the image from its servers, retaining only the hash.

https://www.theverge.com/2017/11/9/16630900/facebook-revenge-porn-defense-details
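The core mechanism here is a hash-based blocklist: once an image is fingerprinted, any later upload with the same fingerprint can be rejected without storing the image itself. Facebook reportedly uses perceptual photo-matching for this; the sketch below substitutes an exact cryptographic hash (SHA-256) purely to illustrate the blocklist idea, so it would not catch resized or re-encoded copies the way a perceptual hash can. All names here (`report_image`, `allow_upload`) are hypothetical, not Facebook's API.

```python
import hashlib

# Set of fingerprints of reported images; only hashes are retained,
# never the image bytes themselves.
blocklist: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    # Exact-match fingerprint. A production system would use a
    # perceptual hash so near-duplicates also match.
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    # Called after a human reviewer confirms the report;
    # the image can then be deleted, keeping only the hash.
    blocklist.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    # Reject any upload whose fingerprint is on the blocklist.
    return fingerprint(image_bytes) not in blocklist

reported = b"bytes of the reported image"
report_image(reported)
print(allow_upload(reported))                  # False: blocked
print(allow_upload(b"some unrelated image"))   # True: allowed
```

The privacy trade-off is that the service keeps only the one-way hash, not the content, but a human reviewer still sees the original once during verification.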

