Dr. T's security brief


Daniel Tauritz

Mar 13, 2021, 7:48:54 AM
to sec-...@googlegroups.com

Criminals Are Jumping on This Niche Programming Language to Write the Latest Malware
TechRadar
Mayank Sharma
March 1, 2021


Cybersecurity company Intezer warns that Google's open source Go programming language has become a popular tool for malware authors, having identified nearly 2,000% growth in new Go-based malware strains in the wild. Intezer's analysis noted that the TIOBE programming community index named Go its 2016 Programming Language of the Year, which may have drawn malware writers' interest. Intezer cited both state-sponsored and non-state-sponsored threat actors among Go's users, who employ the language to create bots for distributed denial-of-service attacks and to install cryptominers; such cryptominers constitute a large portion of current Linux malware written in Go. Intezer suggests malefactors also favor Go's networking stack, since Go is a preferred language for writing cloud-native applications.

Full Article

 

 

SolarWinds Hack Pits Microsoft Against Dell, IBM Over How Companies Store Data
The Wall Street Journal
Aaron Tilley
February 27, 2021


Major technology companies are debating the safest data storage measures for customers following the SolarWinds hack, which compromised many U.S. government and corporate networks. Microsoft's argument that clients should use cloud computing systems is up against Dell Technologies and IBM's contention that a hybrid-cloud system is more secure. Government and industry experts think the suspected Russia-led exploit was conducted via networking company SolarWinds; Microsoft's Brad Smith backed the cloud migration solution in last week's House committee hearing, claiming his company identified on-premises systems as targets in the breach. Meanwhile, Paul Cormier at IBM subsidiary Red Hat said it is impractical to expect companies to move all their data to the cloud, as many must retain data on-premises for security or regulatory reasons. Dell's Deepak Patil added that "the reality is, look at a majority of customers, their workloads are running on-prem."

Full Article

*May Require Paid Registration

 

 

New York, IBM Begin Testing Covid-19 Digital Health Pass
ZDNet
Stephanie Condon
March 2, 2021


IBM and New York State have initiated the testing of their forthcoming blockchain-based Covid-19 digital health pass (Excelsior Pass), through which New Yorkers can securely display proof of a negative test result or vaccination certification. A group of predetermined participants on Saturday used the pass to gain admission to the Brooklyn Nets basketball game at Barclays Center, while on Tuesday volunteers used it to enter a hockey game at Madison Square Garden. The pass and its accompanying verification application are built on IBM's Digital Health Pass app, which employs blockchain to preserve privacy and let individuals store, manage, and share health status from mobile devices. Users can either print out or store their pass on smartphones, and each pass features a quick response code for venues to scan with the verification app. IBM's app also plugs into multiple data sources, and its open architecture will enable adoption by other states and organizations.

Full Article

 

 

 

Browser-Tracking Hack Works Even When You Flush Caches or Go Incognito
Ars Technica
Dan Goodin
February 19, 2021


Some websites are using a hack that thwarts anti-tracking countermeasures by exploiting the tiny icons that sites display in users' browser tabs and bookmark lists (favicons), according to a study by University of Illinois, Chicago (UIC) researchers. The researchers said most browsers cache images in a location independent of those that store site data, browsing history, and cookies; sites can load a series of favicons on visitors' browsers that flag them over an extended period of time. The UIC team said any website can deploy the attack workflow without user interaction or consent, even when popular anti-tracking extensions are implemented. In addition, the hack utilizes resources in the favicon cache even with incognito browsing engaged, due to improper isolation practices found in all major browsers.
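The cache-as-identifier trick can be sketched in a few lines. A hypothetical tracking server assigns each first-time visitor an N-bit identifier and serves favicons only on the subpaths matching that visitor's 1-bits; on a return visit, the bits are read back from which favicons the browser does not re-request. Everything below, including the paths and the bit width, is illustrative rather than the UIC team's actual workflow.

```python
# Simulate the favicon-cache side channel: "writing" an ID caches icons for
# the visitor's 1-bits; "reading" it back observes which icons are cached
# (a cached icon means the browser sends no request for that subpath).
N_BITS = 8

def write_id(visitor_id: int, browser_cache: set) -> None:
    """First visit: redirect through one subpath per 1-bit so its favicon is cached."""
    for bit in range(N_BITS):
        if visitor_id >> bit & 1:
            browser_cache.add(f"/track/{bit}/favicon.ico")

def read_id(browser_cache: set) -> int:
    """Return visit: a missing favicon request (a cache hit) reveals a 1-bit."""
    visitor_id = 0
    for bit in range(N_BITS):
        if f"/track/{bit}/favicon.ico" in browser_cache:
            visitor_id |= 1 << bit
    return visitor_id

cache = set()            # stands in for the browser's favicon cache
write_id(0b10110101, cache)
assert read_id(cache) == 0b10110101
```

Because the favicon cache survives cleared cookies and incognito mode, the recovered identifier persists across exactly the countermeasures users rely on.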

Full Article

 

 

Security Flaw Detected for 2nd Time in Credit Cards
ETH Zurich (Switzerland)
Leo Hermann
February 22, 2021


Researchers at Switzerland's ETH Zurich have uncovered a method for bypassing security measures to use certain credit and debit cards without a PIN code. The researchers previously demonstrated a PIN bypass on Visa cards; the new work extends it to Mastercard and Maestro cards by exploiting the data exchanged between the card and the card terminal. Although the technique originally worked only with Visa cards, the researchers were able to manipulate the payment process so the terminal performed a Visa transaction while the card itself performed a Mastercard or Maestro transaction. The researchers informed Mastercard of their findings, after which the company updated the relevant safeguards.
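Conceptually, the attack is a man-in-the-middle relay that rewrites what each side believes about the transaction. The toy relay below is purely illustrative: the field names are invented, and the real attack manipulates specific EMV data objects exchanged wirelessly between a genuine card and terminal.

```python
def relay(card_msg: dict) -> dict:
    """Rewrite the card's response so the terminal believes it is running a
    Visa transaction (no PIN needed), while the card itself still runs a
    Mastercard transaction. Field names are made up for the sketch."""
    forwarded = dict(card_msg)                 # never mutate the original
    if forwarded.get("brand") == "mastercard":
        forwarded["brand"] = "visa"
        forwarded["pin_required"] = False      # the bypassed cardholder check
    return forwarded

card_response = {"brand": "mastercard", "amount": 200, "pin_required": True}
terminal_view = relay(card_response)
assert terminal_view["brand"] == "visa" and terminal_view["pin_required"] is False
assert card_response["pin_required"] is True   # the card's own view is untouched
```

The fix Mastercard deployed amounts to the terminal cross-checking the claimed brand against the card's actual account number range, closing the mismatch the relay exploits.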
 

Full Article

 

 

Electricity Needed to Mine Bitcoin is More Than Used by 'Entire Countries'
The Guardian (U.K.)
Lauren Aratani
February 27, 2021


The Cambridge Bitcoin Electricity Consumption Index estimated that the electricity used to mine bitcoin last year rivaled the annual electricity consumption of entire countries such as Argentina. Bitcoin mining entails solving computationally intensive puzzles in order to validate transactions and generate new bitcoins, with miners rewarded in the cryptocurrency; a maximum of 21 million bitcoins can be mined, and the more that are mined, the harder the puzzles become. More than 18.5 million bitcoins have been mined so far, and the process demands powerful, energy-hungry computers. Environmentalists are concerned because, they say, bitcoin miners use the cheapest available source of electricity to power the process, even if that turns out to be coal. Bitcoin advocates counter that mining underpins a secure, inexpensive global value transfer and storage system that is worth the environmental cost.
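The puzzles in question are proof-of-work: miners brute-force a nonce whose hash of the block data meets a difficulty target, and that brute force is what consumes the electricity. A minimal sketch follows; real Bitcoin double-hashes an 80-byte block header against a 256-bit numeric target rather than checking a hex prefix.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so SHA-256(block_data + nonce) starts with
    `difficulty` zero hex digits; each extra digit multiplies the work by 16."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example block payload", difficulty=4)
proof = hashlib.sha256(f"example block payload{nonce}".encode()).hexdigest()
assert proof.startswith("0000")   # anyone can verify the work with one hash
```

Verification takes a single hash while finding the nonce takes tens of thousands of attempts on average even at this toy difficulty; that asymmetry, scaled up, is the country-sized energy bill.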

Full Article

 

 

AI Panel Urges U.S. to Boost Tech Skills Amid China's Rise
Associated Press
Matt O'Brien
March 1, 2021


The National Security Commission on Artificial Intelligence (AI) issued its final report to Congress on March 1, calling on the U.S. to enhance its AI skills as a means of countering China. The 15-member commission, which includes executives from Google, Microsoft, Oracle, and Amazon, indicated that, with or without the U.S. and other democracies, machines that "perceive, decide, and act more quickly" and more accurately than humans will be used for military purposes. Despite warning against their unchecked use, the report does not support a global ban on autonomous weapons. It also recommends "wise restraints" on the use of facial recognition and other AI tools that could be used for mass surveillance, and calls for a "White House-led strategy" to defend against AI-related threats, set standards for the responsible use of AI, and increase research and development to maintain a technological edge over China.

Full Article

 

 

Bots Hyped GameStop on Major Social Media Platforms, Analysis Finds
Reuters
Michelle Price
February 26, 2021


Analysis by cybersecurity company PiiQ Media determined that bots on major social media platforms have been hyping up GameStop and other meme stocks, suggesting foreign actors' participation in the trading frenzy fueled by social news aggregation, Web content rating, and discussion website Reddit. PiiQ said it examined patterns of keywords like "Hold the Line" and GameStop's stock symbol (GME) across conversations and profiles from before the Jan. 28 trading frenzy through Feb. 18. Also identified were similar daily start-and-stop patterns in GameStop-related posts, with activity starting at the onset of the trading day, followed by a big spike at the close; PiiQ's Aaron Barr said such patterns are bot signatures. PiiQ estimated tens of thousands of bot accounts hyped GameStop, meme stocks, and the Dogecoin cryptocurrency; Barr expects to find a similar activity pattern on Reddit.

Full Article

 

 

Virtual Computer Chip Tests Expose Flaws, Protect Against Hackers
New Scientist
Matthew Sparkes
February 24, 2021


Researchers at the University of Michigan, Virginia Polytechnic Institute and State University, and Google have accelerated computer-chip testing by simulating chips and applying advanced software testing tools for analysis of the simulations. Virtual testing lets engineers utilize fuzzing, a method that monitors for unexpected results or crashes that can be reviewed and corrected. The researchers had to modify software fuzzers to run over time, rather than trigger a single input and wait for the response. This approach enabled a chip that would usually take 100 days to test to be analyzed in one day. The researchers think faster hardware testing could reduce development time and bring more reliable, more secure next-generation chips to market faster.
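The shift the researchers describe, from single stimulus/response checks to fuzzing sequences of inputs over time, can be illustrated with a toy software model of a chip block. Everything below is hypothetical: a deliberately buggy FIFO queue stands in for the simulated hardware, and the fuzzer drives random operation sequences against a known-good reference, reporting any divergence for engineers to review.

```python
import random

class BuggyFifo:
    """Toy software model of a hardware FIFO with a planted bug: it silently
    drops writes once 15 entries are stored, though its advertised depth is 16."""
    DEPTH = 16

    def __init__(self):
        self.data = []

    def push(self, value):
        if len(self.data) < 15:          # bug: should compare against DEPTH
            self.data.append(value)

    def pop(self):
        return self.data.pop(0) if self.data else None

def fuzz_sequences(trials=20, ops=256, seed=1):
    """Drive random operation *sequences* against the model and a golden
    reference queue, checking agreement after every pop and at trial end."""
    rng = random.Random(seed)
    for _ in range(trials):
        dut, ref = BuggyFifo(), []
        for _ in range(ops):
            if rng.random() < 0.7:                     # mostly pushes
                value = rng.randrange(256)
                dut.push(value)
                if len(ref) < BuggyFifo.DEPTH:
                    ref.append(value)
            elif dut.pop() != (ref.pop(0) if ref else None):
                return "mismatch found"
        if dut.data != ref:                            # end-of-trial state check
            return "mismatch found"
    return "no mismatch found"

assert fuzz_sequences() == "mismatch found"
```

A single-input test would never fill the queue deep enough to hit the bug; only a sustained sequence of operations exposes it, which is the point of running the fuzzer over time.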
 

Full Article

*May Require Free Registration

 

 

Malware Now Targeting Apple's M1 Processor
Wired
Lily Hay Newman
February 17, 2021


Security researchers have identified malware customized to run on Apple's new M1 processors in the MacBook Pro, MacBook Air, and Mac Mini computers. Mac security researcher Patrick Wardle reported a Safari adware extension originally authored to run on Intel x86 chips has been redeveloped to target M1s. The GoSearch22 sample Wardle found masquerades as a legitimate Safari browser extension, then collects user data and posts illicit ads, including some linking to other malicious sites. Researchers with security firm Red Canary said they also are probing a strain of native M1 malware distinct from Wardle's discovery, adding that there is often a lag in detection rates as antivirus and other monitoring tools gather digital signatures for new types of malware.

Full Article

 

 

EU Report Warns AI Makes Autonomous Vehicles 'Highly Vulnerable' to Attack
VentureBeat
Khari Johnson
February 22, 2021


A report by the European Union Agency for Cybersecurity (ENISA) describes autonomous vehicles as "highly vulnerable to a wide range of attacks" that could jeopardize passengers, pedestrians, and people in other vehicles. The report identifies potential threats to self-driving vehicles as including sensor attacks with light beams, as well as adversarial machine learning (ML) hacks. With growing use of artificial intelligence (AI) and the sensors that power autonomous vehicles offering greater potential for attacks, the researchers advised policymakers and businesses to foster a security culture across the automotive supply chain, including third-party providers. The researchers suggested AI and ML systems for autonomous vehicles “should be designed, implemented, and deployed by teams where the automotive domain expert, the ML expert, and the cybersecurity expert collaborate."

Full Article

 

 

AI Here, There, Everywhere
The New York Times
Craig S. Smith
February 23, 2021


Researchers anticipate increasingly personalized interactions between humans and artificial intelligence (AI), and are refining the largest and most powerful machine learning models into lightweight software that can operate in devices like kitchen appliances. Privacy remains a sticking point, and scientists are developing techniques to use people's data without actually viewing it, or protecting it with currently unhackable encryption. Some security cameras currently use AI-enabled facial recognition software to identify frequent visitors and spot strangers, but networks of overlapping cameras and sensors can result in ambient intelligence that can constantly monitor people. Stanford University's Fei-Fei Li said such ambient intelligence "will be able to understand the daily activity patterns of seniors living alone, and catch early patterns of medically relevant information," for example.
 

Full Article

*May Require Paid Registration

Daniel Tauritz

Mar 15, 2021, 4:22:17 PM
to sec-...@googlegroups.com

White House Cites 'Active Threat,' Urges Action Despite Microsoft Patch
Reuters
Jeff Mason
March 8, 2021


The White House has advised computer network operators to further efforts to determine whether their systems were targeted by an attack on Microsoft's Exchange email server software, warning of serious vulnerabilities still unresolved. Although Microsoft issued a patch to correct the flaws, a back door that can allow access to compromised servers remains open; a White House official called this "an active threat still developing." A source informed Reuters that more than 20,000 U.S. organizations had been compromised by the hack, which Microsoft attributed to China; although for now only a small percentage of infected networks have been compromised via the back door, more attacks are anticipated. Said the White House official, "Patching and mitigation is not remediation if the servers have already been compromised, and it is essential that any organization with a vulnerable server take measures to determine if they were already targeted."

Full Article

 

 

Hackers Breach Thousands of Security Cameras, Exposing Tesla, Jails, Hospitals
Bloomberg
William Turton
March 9, 2021


Hackers say they have compromised data from as many as 150,000 surveillance cameras, including footage from electric vehicle company Tesla. An international hacking collective executed the breach to demonstrate the ease of exposing video surveillance by targeting camera data provided by enterprise security startup Verkada. In addition to footage from Tesla factories and warehouses, the hackers exposed footage from the offices of software provider Cloudflare, and from hospitals, schools, jails, and police stations. Tillie Kottmann, one of the hackers claiming credit for the breach, said the collective obtained root access to cameras, enabling them to execute their own code; they exploited a Super Admin account to access the cameras, and found a username and password for an administrator account online. A Verkada spokesperson said the company has disabled all internal administrator accounts to block unauthorized access.

Full Article

 

 

How to Spot Deepfakes? Look at Light Reflection in the Eyes
UB News Center
Melvin Bankhead III
March 10, 2021


A tool developed by University at Buffalo computer scientists can automatically identify deepfake photos of people by analyzing light reflections in their eyes for minute deviations. The tool exploits the fact that most artificial intelligence (AI)-generated images cannot accurately or consistently reflect the image of what the pictured person is seeing, possibly because many photos are combined to create the fake image. The tool first maps out each face, then analyzes the eyes, the eyeballs, and finally the light reflected in each eyeball. The tool was 94% effective in spotting deepfakes among portrait-like photos taken from actual images in the Flickr Faces-HQ dataset, as well as fake AI-generated faces from the www.thispersondoesnotexist.com repository.
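The underlying check is a consistency test: both corneas reflect the same scene, so their specular highlights should match closely in a genuine photo and diverge in a GAN-generated one. The helper below is an illustrative stand-in, not the Buffalo tool's pipeline (which segments the corneas itself): it simply scores two already-extracted reflection patches with normalized cross-correlation.

```python
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity patches;
    1.0 means identical up to brightness/contrast, negative means opposed."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def looks_consistent(left_eye, right_eye, threshold=0.9):
    """Both eyes view the same scene, so their reflections should correlate."""
    return ncc(left_eye, right_eye) >= threshold

real = ([10, 200, 40, 30], [12, 198, 43, 28])    # matching highlights
fake = ([10, 200, 40, 30], [180, 20, 90, 140])   # inconsistent highlights
assert looks_consistent(*real) and not looks_consistent(*fake)
```

The threshold here is arbitrary; the published 94% figure comes from the researchers' own scoring on the Flickr Faces-HQ and thispersondoesnotexist.com data, not from this sketch.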
 

Full Article

 

 

Bug Bounties: More Hackers Spotting Vulnerabilities Across Web, Mobile, IoT
ZDNet
Danny Palmer
March 9, 2021


HackerOne's 2021 Hacker Report reveals a 63% jump in the number of hackers submitting vulnerabilities to bug bounty programs during the last year. Earnings for ethical hackers disclosing vulnerabilities to the HackerOne bug bounty program more than doubled to $40 million in 2020, from $19 million in 2019. Most of the hackers focus on Web applications, but submissions of vulnerabilities associated with Android devices, Internet of Things devices, and application programming interfaces also increased last year. Said HackerOne's Jobert Abma, "We're seeing huge growth in vulnerability submissions across all categories and an increase in hackers specializing across a wider variety of technologies."
 

Full Article

 

 

Algorithm Helps AI Systems Dodge 'Adversarial' Inputs
MIT News
Jennifer Chu
March 8, 2021


Massachusetts Institute of Technology (MIT) researchers have developed a deep learning algorithm designed to help machines navigate real-world environments by incorporating a measure of skepticism toward received measurements and inputs. The team combined a reinforcement-learning algorithm with a deep neural network, techniques each previously used to train computers to play games like Go and chess, to build the Certified Adversarial Robustness for Deep Reinforcement Learning (CARRL) approach. CARRL outperformed standard machine learning techniques in tests involving a simulated collision-avoidance task and the videogame Pong, even when confronted with adversarial inputs. MIT's Michael Everett said, "Our approach helps to account for [imperfect sensor measurements] and make a safe decision. In any safety-critical domain, this is an important approach to be thinking about."
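The core idea can be sketched in a few lines: rather than trusting the observed state, the agent considers every state an attacker could have disguised within a noise bound epsilon and picks the action with the best worst case. The toy Q-table below is invented, and the real CARRL certifies these bounds through a neural network rather than enumerating discrete states.

```python
def robust_action(q_table, observed, epsilon):
    """q_table[state][action] holds expected rewards; any true state within
    +/- epsilon of the observed state index is treated as possible."""
    candidates = [s for s in range(len(q_table)) if abs(s - observed) <= epsilon]
    n_actions = len(q_table[0])
    # max over actions of the (min over plausible states) Q-value
    return max(range(n_actions),
               key=lambda a: min(q_table[s][a] for s in candidates))

# Toy table: action 0 is great in state 1 but disastrous in state 2;
# action 1 is safely mediocre everywhere.
q = [[0.5, 0.4],
     [0.9, 0.4],
     [-1.0, 0.4]]
assert robust_action(q, observed=1, epsilon=0) == 0   # trust the sensor
assert robust_action(q, observed=1, epsilon=1) == 1   # hedge against spoofing
```

The trade-off Everett describes falls out directly: with epsilon set too high the agent becomes over-cautious and forgoes reward, with it set to zero the agent is fully exposed to adversarial inputs.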
 

Full Article

 

 

Study Reveals Extent of Privacy Vulnerabilities With Amazon's Alexa
North Carolina State University
March 4, 2021


Researchers at North Carolina State University identified a number of privacy concerns related to programs, or skills, run on Amazon's voice-activated assistant Alexa. The researchers used an automated program to collect 90,194 unique skills in seven different skill stores, and an automated review process to analyze each skill. They found Amazon does not verify the name of the developer responsible for publishing the skill, meaning an attacker could register under the name of a trustworthy organization. They also found multiple skills can use the same invocation phrase, so consumers might think they are activating one skill but are inadvertently activating and sharing information with another. In addition, the researchers found that developers can modify the code of their programs to request additional information after receiving Amazon approval. Moreover, nearly a quarter (23.3%) of 1,146 skills requesting access to sensitive data had misleading or incomplete privacy policies, or lacked them altogether.
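The invocation-phrase collision the researchers describe is straightforward to screen for given a skill catalog. A sketch (the skill IDs and phrases below are invented):

```python
from collections import defaultdict

def invocation_collisions(skills):
    """Group skills by invocation phrase; any phrase claimed by more than one
    skill is a potential squatting risk of the kind the study flags."""
    by_phrase = defaultdict(list)
    for skill_id, phrase in skills:
        by_phrase[phrase.lower().strip()].append(skill_id)
    return {p: ids for p, ids in by_phrase.items() if len(ids) > 1}

catalog = [("skill-a", "space facts"),
           ("skill-b", "Space Facts"),
           ("skill-c", "daily horoscope")]
assert invocation_collisions(catalog) == {"space facts": ["skill-a", "skill-b"]}
```

The study's harder findings, unverified developer names and post-approval code changes, cannot be caught this way; they require verification at publication time rather than catalog analysis.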

Full Article

 

 

CredChain: Take Control of Your Own Digital Identity ... and Keep That Valuable Bitcoin Password Safe
UNSW Sydney Newsroom (Australia)
Neil Martin
March 12, 2021


The CredChain Self-Sovereign Identity platform architecture developed by researchers at Australia's University of New South Wales (UNSW) School of Computer Science and Engineering uses blockchain to create, share, and verify cryptocurrency credentials securely. UNSW's Helen Paik and Salil Kanhere said CredChain could offer Key Sharding, the process of splitting complicated passwords into meaningless shards stored in different locations that can only be validated when recombined. Kanhere said, "If or when the key is lost, the owner can present enough pieces of the keys to the system to prove his identity and recover the original." Paik said CredChain offers decentralized identity authority via the blockchain, and “also ensures that when a credential is shared, the user can redact parts of the credential to minimize the private data being shared, while maintaining the validity of the credential."
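The article does not spell out CredChain's sharding scheme, but the behavior described, recovering the key from "enough pieces," matches threshold secret sharing. A minimal Shamir-style sketch under that assumption: any k of n shards reconstruct the secret, while fewer reveal nothing.

```python
import random

PRIME = 2**127 - 1  # prime field large enough for a 16-byte secret

def split(secret: int, n: int, k: int):
    """Shamir k-of-n split: the secret is the constant term of a random
    degree-(k-1) polynomial; each shard is one evaluation point."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shards):
    """Lagrange interpolation at x = 0 recombines any k shards."""
    total = 0
    for i, (xi, yi) in enumerate(shards):
        num = den = 1
        for j, (xj, _) in enumerate(shards):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

key = 0xC0FFEE
shards = split(key, n=5, k=3)
assert recover(shards[:3]) == key   # any 3 of the 5 shards suffice
assert recover(shards[2:]) == key
```

Storing the shards in different locations, as Paik and Kanhere describe, means no single compromised store reveals the key, yet a user who retains any k shards can always recover it.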
 

Full Article

 

 

Researchers Discover Privacy-Preserving Tools Leave Private Data Unprotected
NYU Tandon School of Engineering
March 3, 2021


Researchers at the New York University Tandon School of Engineering (NYU Tandon) found that the machine learning frameworks underlying third-party privacy preservation tools built on generative adversarial networks (GANs) are not very effective at safeguarding private data. Their analysis determined that privacy-protecting GANs (PP-GANs) can be subverted to pass empirical privacy checks while still permitting the extraction of secret information from "sanitized" images. NYU Tandon's Siddharth Garg said, "While our adversarial PP-GAN passed all existing privacy checks, we found that it actually hid secret data pertaining to the sensitive attributes, even allowing for reconstruction of the original private image." The researchers applied a novel steganographic approach to adversarially modify a state-of-the-art PP-GAN so that it hid a user ID within sanitized face images; the modified model passed privacy checks while permitting recovery of the secret 100% of the time.
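The NYU attack hides the secret with an adversarially trained GAN, but the principle, that an innocent-looking "sanitized" image can carry recoverable hidden bits, is classic steganography. The sketch below illustrates that principle with simple least-significant-bit encoding; it is not the paper's method.

```python
def embed(pixels, secret_bits):
    """Hide one bit per pixel in the least-significant bit; the visible
    intensity of each pixel changes by at most 1."""
    stego = [(p & ~1) | b for p, b in zip(pixels, secret_bits)]
    return stego + pixels[len(secret_bits):]

def extract(pixels, n_bits):
    """Recover the hidden bits from the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

sanitized = [120, 121, 119, 200, 201, 33]   # stand-in for sanitized image pixels
secret = [1, 0, 1, 1]                       # e.g. bits of a user ID
stego = embed(sanitized, secret)
assert extract(stego, 4) == secret
assert all(abs(a - b) <= 1 for a, b in zip(stego, sanitized))  # visually identical
```

An empirical privacy check that only inspects what the image looks like, which is what the paper shows existing checks effectively do, cannot distinguish the stego output from a genuinely sanitized one.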

Full Article

 

 

Cybersecurity Researchers Build Better 'Canary Trap'
Dartmouth News
David Hirsch
March 1, 2021


The WE-FORGE data protection system developed by Dartmouth College cybersecurity researchers uses an artificial intelligence version of the "canary trap," in which multiple false documents are distributed to conceal secrets. The system uses natural language processing to automatically generate false documents to protect intellectual property. WE-FORGE also adds an element of randomness, to keep adversaries from easily identifying actual documents. The algorithm computes similarities between concepts in a document, analyzes each word's relevance, then sorts concepts into "bins" and computes a feasible candidate for each group. Dartmouth's V.S. Subrahmanian said, "The system produces documents that are sufficiently similar to the original to be plausible, but sufficiently different to be incorrect."
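The bin-and-swap step can be sketched as follows. The concept bins, terms, and number jitter below are invented for illustration; WE-FORGE computes concept similarities and picks replacements with natural language processing rather than from hand-built lists.

```python
import random
import re

TERM_BINS = {  # illustrative "bins" of interchangeable domain concepts
    "catalyst": ["palladium", "platinum", "nickel"],
    "solvent": ["ethanol", "acetone", "toluene"],
}

def make_decoy(doc: str, rng: random.Random) -> str:
    """Swap each known term for a different one from its bin and jitter the
    numbers, so the decoy stays plausible but incorrect."""
    swaps = {}
    for terms in TERM_BINS.values():
        for t in terms:
            swaps[t] = [x for x in terms if x != t]
    pattern = re.compile("|".join(map(re.escape, swaps)))
    doc = pattern.sub(lambda m: rng.choice(swaps[m.group()]), doc)
    return re.sub(r"\d+",
                  lambda m: str(int(m.group()) + rng.choice([5, 7, 11])),
                  doc)

rng = random.Random(42)
original = "Heat to 180 C with a palladium catalyst in ethanol."
decoy = make_decoy(original, rng)
assert decoy != original and "palladium" not in decoy
```

The randomness matters: generating many decoys with different seeds is what denies an adversary any pattern for separating the real document from the fakes.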

Full Article

 

 

Google to Stop Tracking Users for Targeted Ads
The Hill
Rebecca Klar
March 3, 2021


Google says it will stop tracking individual users across the Web to sell targeted advertising, and will not build alternative user-tracking tools to replace third-party cookies. The company in 2020 pledged to phase out the use of third-party cookies within two years as part of its Privacy Sandbox initiative to develop standards that improve online privacy. Google's David Temkin said in Wednesday's announcement that Google products will be powered by "privacy-preserving [application programming interfaces] which prevent individual tracking while still delivering results for advertisers and publishers." He cited data Google released in January demonstrating a method to "effectively" remove third-party cookies from advertising by "clustering" communities of people with similar interests, rather than targeting specific individuals.
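The announcement does not detail the clustering method, but the gist of cohort-based targeting can be sketched: each browser is assigned to the nearest of a set of shared interest centroids, and advertisers only ever see the cohort label, never the individual's profile. The vectors and centroids below are invented for illustration.

```python
def nearest_cohort(user_vec, centroids):
    """Assign a user's interest vector to the closest centroid; only the
    returned cohort index is ever exposed to advertisers."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: sq_dist(user_vec, centroids[i]))

centroids = [(1.0, 0.0, 0.0),   # cohort 0: sports-heavy browsing
             (0.0, 1.0, 0.0),   # cohort 1: cooking-heavy browsing
             (0.0, 0.0, 1.0)]   # cohort 2: tech-heavy browsing
assert nearest_cohort((0.9, 0.1, 0.2), centroids) == 0
assert nearest_cohort((0.1, 0.2, 0.8), centroids) == 2
```

Privacy here hinges on cohort size: a label shared by thousands of similar users reveals far less than a per-user identifier, which is the trade Google is pitching to advertisers.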

Full Article

 

 

Cutting Off Stealthy Interlopers: A Framework for Secure Cyber-Physical Systems
Daegu Gyeongbuk Institute of Science and Technology (South Korea)
February 25, 2021


Researchers at South Korea's Daegu Gyeongbuk Institute of Science and Technology (DGIST) engineered a cyber-physical system (CPS) framework incorporating real-time cyberattack detection and recovery capabilities. The framework counters pole-dynamics attacks, in which hackers connect to a node in the CPS network and feed it false sensor data, which can cause physical actuators to misbehave. The DGIST team applied software-defined networking (SDN) to make the CPS network more dynamic by distributing signal relays via controllable SDN switches; an attack-detection algorithm in the switches can alert the centralized network manager if false sensor data is being injected. Once the network manager is alerted, the compromised nodes are severed and a new safe path for the sensor data is deployed.
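One simple way an in-network switch could flag injected sensor data is a residual check against a physics-model prediction; the DGIST framework's actual detection algorithm is not detailed in the article, so the sketch below (model, readings, and threshold all invented) shows the general residual-based idea only.

```python
def detect_injection(readings, predict, threshold):
    """Flag the first sample whose deviation from the model prediction exceeds
    the threshold; a hit would be reported to the SDN network manager."""
    for t, measured in enumerate(readings):
        if abs(measured - predict(t)) > threshold:
            return t            # index of the suspect sample
    return None

predict = lambda t: 20.0 + 0.5 * t          # expected plant temperature drift
honest = [20.1, 20.4, 21.2, 21.4]
attacked = honest[:2] + [35.0, 36.0]        # false data injected from t = 2
assert detect_injection(honest, predict, threshold=1.0) is None
assert detect_injection(attacked, predict, threshold=1.0) == 2
```

On detection, the SDN controller's job is rerouting: the switch carrying the flagged node is cut out and sensor traffic takes an alternative path, which is the recovery half of the DGIST framework.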

Full Article

 

Daniel Tauritz

Mar 24, 2021, 7:23:23 PM
to sec-...@googlegroups.com

Malware was Written in an Unusual Programming Language, to Stop It From Being Detected
ZDNet
Danny Palmer
March 11, 2021


Researchers at cybersecurity firm Proofpoint have determined a hacking group known as TA800 is distributing new malware written in the Nim programming language, in order to make it harder to detect. The NimzaLoader malware, distributed via phishing emails that connect to a fake PDF downloader, is intended to give hackers access to Windows computers and the ability to execute commands on them. Proofpoint's Sherrod DeGrippo said, "TA800 has often leveraged different and unique malware, and developers may choose to use a rare programming language like Nim to avoid detection, as reverse-engineers may not be familiar with Nim's implementation or focus on developing detection for it, and therefore tools and sandboxes may struggle to analyze samples of it."

Full Article

 

U.S. Grid at Rising Risk to Cyberattack, Says GAO
The Hill
Zack Budryk
March 18, 2021


An analysis by the U.S. Government Accountability Office (GAO) determined that distribution systems within the country's electrical grid are increasingly vulnerable to cyberattack, in part due to the introduction of, and reliance on, remote monitoring and control technologies. The GAO found this vulnerability is growing because industrial control systems increasingly are accessed remotely. The study said distribution systems largely are not covered by federal cybersecurity standards, although some states and utilities have independently taken action based on those standards. The report urges the Secretary of Energy to work with state officials, industry figures, and the Department of Homeland Security to better mitigate distribution system risks.

Full Article

 

Researchers Blur Faces That Launched a Thousand Algorithms
Wired
Will Knight
March 15, 2021


Privacy concerns prompted the researchers who manage ImageNet to blur every human face within the dataset, then determine whether doing so affects the performance of object-recognition algorithms trained on it. ImageNet features 1.5 million images with about 1,000 labels; faces appear in 243,198 of the images, all of which were blurred. The researchers used Amazon's Rekognition AI service to find the faces, and found that blurring them did not impact the performance of several object-recognition algorithms trained on ImageNet. Princeton University's Olga Russakovsky said, "We hope this proof-of-concept paves the way for more privacy-aware visual data collection practices in the field." However, Massachusetts Institute of Technology's Aleksander Madry cautioned that training an AI model on a dataset with blurred faces could have unintended consequences: "Biases in data can be very subtle, while having significant consequences."

Full Article

 

Cybersecurity Report: 'Smart Farms' Are Hackable Farms
IEEE Spectrum
Payal Dhar
March 15, 2021


Researchers at China's Nanjing Agricultural University (NAU) surveyed smart farming and its underlying technologies, and identified cybersecurity issues unique to agricultural Internet of Things (IoT) applications. Possible threats to IoT integrity include facility damage, sensor failures in poultry and livestock breeding, and control system intrusions in greenhouses. NAU's Xing Yang said the most pressing vulnerabilities in smart agriculture concern the physical environment, such as plant-factory control system intrusions and false positioning of unmanned aerial vehicles; rural areas, for example, are prone to poor network signals, which Yang said makes attacks using false base station signals easier. Yang and his colleagues suggested countermeasures in response, including artificial intelligence to detect malicious users and the application of existing industrial security standards to design a targeted security framework for agricultural IoT.

Full Article

 

 

Hackers Act Differently if Accessing Male or Female Facebook Profiles
New Scientist
Chris Stokel-Walker
March 10, 2021


University of Vermont and Facebook researchers found that hackers on the social media platform display different behavior depending on the age and gender listed on the hacked Facebook account. The researchers created 1,008 realistic Facebook accounts and leaked the login details for 672 of them on websites used by hackers to trade compromised credentials. They used the other accounts to populate the friendship groups of the leaked accounts to monitor them over a six-month period. The researchers found that 46% of the leaked accounts were accessed 322 times combined. They also determined that hackers messaged the friends of younger profiles more than those of older profiles, and that in many cases male accounts—but never female accounts—were vandalized.

Full Article

*May Require Paid Registration

 

 

License-Plate Scans Aid Crime-Solving But Spur Little Privacy Debate
The Wall Street Journal
Byron Tau
March 10, 2021


Law enforcement agencies increasingly are using data gathered by the vast network of automated license-plate scanners to solve crimes. The scanners initially were placed on telephone poles, police cars, toll plazas, bridges, and in parking lots but now can be found on tow trucks and municipal garbage trucks as well. License-plate scans were instrumental in the arrests of several suspected rioters at the U.S. Capitol. However, there are concerns about abuse, misidentification, and the scope of data collection, given that, for instance, some systems read a plate's number but not its state. Electronic Frontier Foundation's Dave Maass said, "License-plate readers are mass surveillance technology. They are collecting data on everyone regardless of whether there is a connection to a crime, and they are storing that data for long periods of time."

Full Article

*May Require Paid Registration

 

 

California Passes Regulation Banning 'Dark Patterns' Under Landmark Privacy Law
Gizmodo
Brianna Provenzano
March 15, 2021


New rules enacted under California's Consumer Privacy Act (CCPA) will bar so-called dark patterns, underhanded design practices that websites or applications use to steer users into choices they did not intend to make. Examples include website visitors suddenly being redirected to a subscription page, even when they have expressed no interest in the product being marketed. According to an infographic from the California Attorney General's office, dark-pattern strategies rely on "confusing language or unnecessary steps such as forced clicking or scrolling through multiple screens or listening to why you shouldn't opt out of their data sale." The new CCPA regulations also add a Privacy Options icon that Internet users can use as a visual cue to opt out of the sale of their personal data.

Full Article

 

NYU Engineering Researchers Discover Privacy-Preserving Tools Leave Private Data Unprotected

Tech Xplore (3/3) reports that “machine-learning (ML) systems are becoming pervasive not only in technologies affecting our day-to-day lives, but also in those observing them.” Companies that “make and use such widely deployed services rely on so-called privacy preservation tools that often use generative adversarial networks (GANs), typically produced by a third party to scrub images of individuals’ identity.” Researchers at the NYU Tandon School of Engineering, “who explored the machine-learning frameworks behind these tools,” found that they leave private data unprotected. In the paper “Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images,” a team “led by Siddharth Garg, Institute Associate Professor of electrical and computer engineering at NYU Tandon, explored whether private data could still be recovered from images that had been ‘sanitized’ by such deep-learning discriminators as privacy protecting GANs (PP-GANs) and that had even passed empirical tests.” The team “found that PP-GAN designs can, in fact, be subverted to pass privacy checks, while still allowing secret information to be extracted from sanitized images.”

 

Report: Ransomware, Data Breaches Are Top Security Threats At Colleges

Higher Ed Dive (2/24, Schwartz) reports that “ransomware is the top security threat at higher education institutions, according to a new report from cybersecurity services firm BlueVoyant.” The research was “based on open-source data, including an automated analysis of threat searches across thousands of colleges worldwide.” Ransomware attacks “on colleges doubled from 2019 to 2020, costing an institution $447,000 on average.” Clop, Ryuk, NetWalker, and DoppelPaymer were the “primary ransomware families targeting education institutions.” Data breaches “accounted for half of the security incidents colleges dealt with in 2019, according to the report.” Nation-state activity “leading to data theft impacted more than 200 institutions over the last two years, it found.”

 

In Testimony Before Senate Panel, Tech Executives Defend Their Actions In SolarWinds Hack

Reuters (2/23, Satter, Menn) reports that in testimony before the U.S. Senate Select Committee on Intelligence Tuesday, “top executives” from “SolarWinds Corp, Microsoft Corp and cybersecurity firms FireEye Inc and CrowdStrike Holdings Inc defended their conduct in breaches blamed on Russian hackers and sought to shift responsibility elsewhere.” Reuters says, “The executives argued for greater transparency and information-sharing about breaches, with liability protections and a system that does not punish those who come forward, similar to airline disaster investigations.”

        The Guardian (UK) (2/23, Paul) reports that Microsoft President Brad Smith “said its researchers believed ‘at least 1,000 very skilled, very capable engineers’ worked on the SolarWinds hack. ‘This is the largest and most sophisticated sort of operation that we have seen,’ Smith told senators.” CNET News (2/23, Hautala) reports that “still unknown is whether the hackers carried out similar attacks on software vendors other than SolarWinds, creating more than one back door for their victims to unwittingly install on their own systems. Hackers also could have used more rudimentary approaches to breach target systems, including phishing or guessing passwords for administrator accounts with high levels of access to company systems.”

 

Daniel Tauritz

Mar 28, 2021, 11:14:59 AM
to sec-...@googlegroups.com

Newly Wormable Windows Botnet Ballooning in Size
TechCrunch
Zack Whittaker
March 23, 2021


Amit Serper and Ophir Harpaz at Israeli security firm Guardicore say a botnet targeting Windows devices is expanding, due to a new infection method that lets the malware spread between computers with weak passwords. The Purple Fox malware attempts to guess Windows user account passwords by targeting the server message block (SMB) protocol that allows Windows to communicate with other devices. Upon infiltration, Purple Fox pulls a malicious payload from a network of nearly 2,000 compromised Windows Web servers and installs a rootkit, keeping the malware latched onto the computer and complicating its detection or removal. It then blocks the firewall ports through which it gained access, generates a list of Internet addresses, and scans the Internet for other targets. Guardicore said Purple Fox infections have soared 600% since May 2020.
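Purple Fox succeeds because the passwords it guesses come from small dictionaries of common choices. A minimal defensive audit of the same idea — checking whether any stored credential matches a common-password list — can be sketched as follows (a hypothetical illustration; real Windows credentials use NTLM hashes and the malware guesses over SMB, neither of which is modeled here, and all names are invented):

```python
# Minimal weak-password audit: flag accounts whose stored hash matches
# an entry in a common-password wordlist -- the same dictionary-guessing
# strategy that lets brute-forcing malware spread. Illustrative only.
import hashlib

COMMON_PASSWORDS = ["123456", "password", "admin", "qwerty", "letmein"]

def sha256(s):
    return hashlib.sha256(s.encode()).hexdigest()

def weak_accounts(account_hashes):
    """Return {user: guessed_password} for hashes found in the wordlist."""
    guesses = {sha256(p): p for p in COMMON_PASSWORDS}
    return {user: guesses[h]
            for user, h in account_hashes.items() if h in guesses}

accounts = {
    "alice": sha256("correct horse battery staple"),
    "bob": sha256("qwerty"),
}
print(weak_accounts(accounts))  # bob's password is trivially guessable
```

Any account this audit flags would fall to the same handful of guesses an SMB brute-forcer tries first, which is why weak passwords are the infection vector here.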

Full Article

 

 

'Expert' Hackers Used 11 Zerodays to Infect Windows, iOS, Android Users
Ars Technica
Dan Goodin
March 18, 2021


Google's Project Zero security researchers warned that a team of hackers used no fewer than 11 zeroday vulnerabilities over nine months, exploiting compromised websites to infect patched devices running the Windows, iOS, and Android operating systems. The group leveraged four zerodays in February 2020, and their ability to link multiple zerodays to expose the patched devices prompted Project Zero and Threat Analysis Group analysts to deem the attackers "highly sophisticated." Project Zero's Maddie Stone said over the ensuing eight months the hackers exploited seven more previously unknown iOS zerodays via watering-hole attacks. Stone blogged, "Overall each of the exploits themselves showed an expert understanding of exploit development and the vulnerability being exploited."

Full Article

 

FBI Warns Of Increase In Ransomware Attacks Targeting Colleges

Inside Higher Ed (3/18, McKenzie) reports that “a group of cybercriminals is increasingly targeting colleges, schools and seminaries and attempting to extort them, the FBI’s Cyber Division has warned.” In an advisory to cybersecurity professionals and system administrators “published Tuesday, the FBI said that criminals are leveraging software called PYSA ransomware to access IT networks, block access to vital information and systems through encryption, and demand payment to restore access.” In a double-extortion tactic that “has also been employed by criminals using other types of ransomware, the criminals are not only requesting payment in exchange for making encrypted data accessible again.” They are also “threatening to sell sensitive information such as Social Security numbers on the dark web if institutions or affected individuals do not meet demands.”

 

 

France's Competition Authority Declines to Block Apple's Opt-in Consent for iOS App Tracking
TechCrunch
Natasha Lomas
March 17, 2021


France's competition authority (FCA) has rejected calls by French advertisers to block looming pro-privacy changes requiring third-party applications to obtain consumers’ consent before tracking them on Apple iOS. FCA said it does not currently deem Apple's introduction of the App Tracking Transparency (ATT) feature as abuse of its dominant position. However, the regulator is still probing Apple "on the merits," and aims to ensure the company is not applying preferential rules for its own apps compared to those of third-party developers. An Apple spokesperson said, "ATT will provide a powerful user privacy benefit by requiring developers to ask users' permission before sharing their data with other companies for the purposes of advertising, or with data brokers. We firmly believe that users' data belongs to them, and that they should control when that data is shared, and with whom."

Full Article

 

 

Tesla Interior Cameras Threaten Driver Privacy, Consumer Reports Says
CNet
Sean Szymkowski
March 24, 2021


Consumer Reports (CR) says in-cabin cameras that electric vehicle manufacturer Tesla incorporates into driver-assist systems can threaten driver privacy. The cameras record and transmit footage from within the vehicle, and the CR report warns that drivers who do not opt out of the program are giving Tesla access to sensitive information. Inside Tesla's Model 3 and Model Y, the camera can record moments before an automatic emergency braking event or before a crash, and it is possible the car shares this content with Tesla; other automakers employ closed-loop systems that do not transmit or save data, much less record drivers in the vehicle. Despite safeguards on who can access this footage, CR says the possibility exists that anyone, including malefactors, can access it.
 

Full Article

 

Researchers Say AI Tools Can Be Fooled By A Written Word

The Verge (3/8, Vincent) reported OpenAI researchers “have discovered that their state-of-the-art computer vision system can be deceived by tools no more sophisticated than a pen and a pad,” as “simply writing down the name of an object and sticking it on another can be enough to trick the software into misidentifying what it sees.” OpenAI researchers wrote in a blog post, “We refer to these attacks as typographic attacks.” The post continued, “By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model.”
