Europe Opens AI 'Crash Test' Centers
Bloomberg
Sanne Wass
June 27, 2023
The European Union has launched four artificial intelligence (AI) facilities to test and validate the safety of innovations prior to their market rollout. The virtual and physical sites will offer a testbed for AI and robotics in real-world manufacturing, healthcare, agricultural, and urban environments starting next year. The Technical University of Denmark said the facilities would function as a "safety filter" between European technology providers and users while complementing public policy. The university described the facilities as a digital version of Europe's crash test system for new automobiles.
*May Require Paid Registration
AI Fake Victims Disrupt Criminal Business Model
The Lighthouse (Macquarie University, Australia)
Fran Molloy
June 26, 2023
The Apate multilingual chatbot created by cybersecurity experts at Australia's Macquarie University could masquerade as intended victims of scam callers as part of an effort to undermine their profitability. Apate uses authentic-sounding voice clones to engage in dialogue and "scam the scammers." The researchers analyzed bogus phone calls to extract scammers' social engineering methods, identifying scam "scripts" via machine learning and natural language processing before training Apate to compose its own conversations. Macquarie's Dali Kaafar said these systems "can fool scammers into thinking they are talking to viable scam victims, so they spend time attempting to scam the bots."
AI's Use in Elections Sets Off a Scramble for Guardrails
The New York Times
Tiffany Hsu; Steven Lee Myers
June 25, 2023
Artificial intelligence (AI)-generated political campaign materials designed to stoke anxiety have spurred demands for safeguards from consultants, election researchers, and lawmakers. In the run-up to the 2024 presidential race, the Republican National Committee issued a video with synthetic dystopian images associated with a Biden victory; the Democrats found AI-drafted fundraising messages often encouraged more engagement and donations than human-written copy. Election advocates are urging legislation to regulate synthetically produced ads, as social media rules and services that purport to police AI content have fallen short. A group of Democratic lawmakers has proposed legislation requiring disclaimers to accompany political ads with AI-generated material, and the American Association of Political Consultants said using deepfake content in political campaigns constitutes an ethics code violation.
*May Require Paid Registration
U.S.-Based Generative AI Job Postings Up 20% in May
Reuters
Chavi Mehta
June 22, 2023
The job portal Indeed reported that the number of generative artificial intelligence (AI)-related job postings rose about 20% in May, to 204 per million total postings, more than double the number of postings in the same month of 2021. Of the AI job postings on Indeed's U.S. platform, 5% were for data scientists; software engineers, machine learning engineers, and data engineers also were in demand. Indeed's Nick Bunker said, "There has been a notable increase in job seeker interest in AI-related jobs, especially since the introduction of ChatGPT." However, Indeed reported a 43.6% overall decrease in U.S. tech job postings from June 2022. The platform also reported that interest in AI jobs surpassed availability: last month, searches for generative AI jobs totaled 147 per million total jobs searched, up from virtually zero in May 2022.
How Big Tech Embraced Disabled Users
France 24
June 21, 2023
Big tech companies increasingly are rolling out technologies that aim to help disabled users. Apple's Live Speech, for instance, recreates a user's voice using artificial intelligence (AI), allowing those with speech issues to have typed messages read aloud in their natural voices. Meanwhile, an updated version of Google's Lookout app, which uses AI to describe images to visually impaired users, will be able to identify objects without labels. Representatives of both companies said at a recent tech event in Paris that accessibility is a priority. Other companies also are working to help those with sensory impairments; Microsoft’s Seeing AI describes photos for visually impaired people, while French firm Sonar Vision is developing technology to guide visually impaired people around cities, and the company Equally AI is harnessing ChatGPT to improve the accessibility of websites.
Mercedes Bringing ChatGPT into its Cars
CNN Business
Peter Valdes-Dapena
June 15, 2023
German automaker Mercedes-Benz has partnered with Microsoft to add ChatGPT generative artificial intelligence software to Mercedes-Benz cars in the U.S. Microsoft said ChatGPT would make the vehicles' voice-command capability smoother by supporting more natural-seeming dialogue. The system will be able to recall the context of discussions and engage in back-and-forth conversation with the driver or occupants. Microsoft said the chatbot will allow the system to respond to more diverse requests, including those not related to the car or driver, and interact with other functions, like buying movie tickets. U.S.-based Mercedes owners whose vehicles include the MBUX infotainment system have been able to beta-test ChatGPT since June 16.
92% of Programmers Use AI Tools: Survey
ZDNet
Steven Vaughan-Nichols
June 14, 2023
A recent survey by GitHub found that 92% of U.S.-based developers use artificial intelligence (AI) coding tools, with only 6% using them solely outside of work. Of the 500 U.S.-based developers polled, 70% said their code has benefited significantly from AI. The respondents said AI coding tools are useful in achieving performance standards with better code quality, faster outputs, and fewer production-level issues. However, AI code appears to be a means to an end for developers, as the survey found that they “want to upskill, design solutions, get feedback from end users, and be evaluated on their communication skills." Said GitHub's Inbal Shani, "Engineering leaders will need to ask whether measuring code volume is still the best way to measure productivity and output."
Welcome to White Castle. Would You Like Human Interaction with That?
The Wall Street Journal
Heather Haddon
June 13, 2023
Fast-food chains like White Castle are deploying artificial intelligence-enabled chatbots in drive-throughs, with restaurant executives claiming the technology can boost efficiency by freeing staff to perform other jobs. The bots "talk" through drive-through speakers, with the order count displayed on a screen while workers monitoring on headsets stand ready to step in if things go wrong. Across 10 orders on a recent day, three customers at a White Castle in Merrillville, IN, asked to talk to a person because the Julia drive-through chatbot misheard orders or because they preferred human interaction. California-based Presto Automation is training chatbots for this work and testing personalized custom voices.
*May Require Paid Registration
Computer Vision Technique Enhances Microscopy Image Analysis for Cancer Diagnosis
University of Michigan Computer Science and Engineering
June 9, 2023
Researchers at the University of Michigan (U-M) and its Michigan Medicine health care complex have developed a computer vision learning technique that analyzes microscopic images of tumors using artificial intelligence (AI) and machine learning to make cancer diagnoses. The HiDisc tool analyzes multiple patches of a single tumor; it also relies less on data augmentation to transform and label images. In tests against state-of-the-art self-supervised learning methods for diagnosing cancer, the researchers observed an almost 4% improvement in image classification compared to the best-performing baseline method. U-M's Cheng Jiang said, "Our goal is to use AI to help clinicians make a diagnosis based on these tumor images virtually instantaneously during surgery, while the surgeon is waiting."
Wisconsin State Journal (6/10, Rickert) reported UW-Madison assistant professor Yonatan Mintz “is working on a project to make diabetes treatment more efficient and effective in poor urban areas in India.” He said a “computer-based board game he’s helped develop is intended to shed light on the different ways machines and humans learn,” and it “could eventually lead to ways humans and AI can work together to solve problems, like in emergencies when time is precious.” Additionally, Mintz is “worried less about the ‘Terminator problem’ – that artificial intelligence could one day escape human control and take over the world, a la the ‘Terminator’ movie franchise – than about the more imminent and already-known potential dangers of AI.” He believes there are “plenty of more practical, less existential problems to address in the advent of artificial intelligence,” but he also “favors the notion of including a ‘kill switch’ with certain AI systems.”
In a Future In Five feature for Politico (6/9, Kern), Rebecca Kern spoke with Jennifer Chayes, the dean of University of California, Berkeley’s new College of Computing, Data Science and Society. She’s interested in “using large language models like ChatGPT to process new datasets to solve pressing problems like climate change,” as well as “expanding racial and gender diversity in STEM.” During their discussion, Chayes noted that an “underrated big idea” is AI for science. She said, “AI for science – for which we’re developing new techniques for areas with relatively sparse data – is going to transform biomedicine and health.” She added that it will “transform climate and sustainability,” as well as “the platforms on which we allocate resources in public health, in welfare and in other areas.”
Reuters (6/8) reported Meta Platforms “gave employees a sneak peek at a series of AI tools it was building, including ChatGPT-like chatbots planned for Messenger and WhatsApp that could converse using different personas.” Company executives “speaking at an all-hands meeting also demonstrated a coming Instagram feature that could modify user photos via text prompts and another that could create emoji stickers for messaging services, according to a summary of the session provided by a Meta spokesperson.” Meta Chief Executive Mark Zuckerberg “told employees at the session on Thursday that advancements in generative AI in the last year had now made it possible for the company to build the technology ‘into every single one of our products.’”
Bloomberg (6/8, Subscription Publication) reported that at the meeting, Zuckerberg “sought to reassure employees about the company’s strategy, especially its emphasis on artificial intelligence, just two weeks after it finished the latest round of job cuts.” Bloomberg adds, “It’s a fraught time at Meta. Management jettisoned 10,000 employees in a drawn-out firing process that left the Menlo Park, California-based company without a tech road map and shook employee confidence in the direction for the business, people familiar with the matter have said.”
Also reporting are CNBC (6/8, Vanian), The Guardian (UK) (6/9), and Fortune (6/8).
Bloomberg (6/8, Grant, Subscription Publication) reported Microsoft said Thursday that it “will create a program to assure customers the artificial intelligence software they buy from the company will meet any future laws and regulations, looking to keep clients investing in AI tools ahead of whatever rules are passed governing the new technology.” According to Bloomberg, the company said it “will help clients manage regulatory issues stemming from AI applications they deploy with Microsoft, convene customer councils on the issues and continue its engagement with lawmakers ‘to promote effective and interoperable AI regulation.’”
VentureBeat (6/8) reported, “Yesterday, Microsoft announced its new Azure OpenAI Service for government. Today, the tech giant unveiled a new set of three commitments to its customers as they seek to integrate generative AI into their organizations safely, responsibly, and securely.” The commitments include “sharing its learnings about developing and deploying AI responsibly,” “creating an AI assurance program,” and “supporting customers as they implement their own AI systems responsibly.”
Reuters (6/9, Bartz) reported the Senate on Thursday “introduced two separate bipartisan artificial intelligence bills...amid growing interest in addressing issues surrounding the technology.” One bill “would require the U.S. government to be transparent when using AI to interact with people and another would establish an office to determine if the United States is remaining competitive in the latest technologies.” Proposed AI legislation has taken center stage in political media coverage as software such as ChatGPT has grown more prominent in the cultural conversation.
Education Week (6/9, Klein) reported “artificial intelligence persona chatbots” can make “extraordinary conversations possible, at least technically.” However, “many of the tools spit out inaccuracies right alongside verifiable facts, feature significant biases, and appear hostile or downright creepy in some cases, educators and experts who have examined the tools point out.” For example, Micah Miner, the director of instructional technology for a Chicago-area school district, “worries the bots could reflect the biases of their creators. A James Madison chatbot programmed by a left-leaning Democrat could give radically different answers to students’ questions about the Constitution than one created by a conservative Republican, for instance.” Miner noted one big exception: he sees great potential in persona bots if the lesson is exploring how AI itself works.
Nexstar (6/12, Elbein) reports that according to research published online in the Journal of Applied Psychology by the American Psychological Association, workers who frequently use artificial intelligence (AI) systems are risking their mental and emotional health and are more likely to suffer from insomnia and to drink more after work. The study, which was “based on surveys of about 800 office workers around the world,” examined “the impacts on workers of an ongoing ‘Fourth Industrial Revolution,’ spawned by AI.”
In an interview with BNN Bloomberg (CAN) (6/12, Johnson), Cohere President and COO Martin Kon “said the development of generative artificial intelligence will bring about fundamental changes.” Kon explained, “I think what we’re seeing in generative AI, this is a once [in] every 15, perhaps 30-year transformation in how humans and computers interact.” Kon added, “We are standing right before, or starting down, a very similar massively transformational point in time. This is going to be similar to [the] steam engine’s impact on physical labour, in terms of the impact on intellectual labour and that’s extremely exciting.”
The Wall Street Journal (6/12, Stupp, Subscription Publication) reports Nathaniel Fick, Ambassador-at-Large for Cyberspace and Digital Policy, told reporters on Monday that draft EU legislation on artificial intelligence could hurt the development of the industry in Europe. Fick said, “I have a concern that the impulses of some European political leaders will create a scenario where all the globe-leading AI companies will also be American as the cloud-computing companies are.”
The Wall Street Journal (6/13, Dotan, Seetharaman, Subscription Publication) reports that Microsoft and OpenAI’s partnership, one of the most noteworthy in the contemporary tech industry, has also produced behind-the-scenes confusion and conflict.
Engadget (6/13) reports, “OpenAI warned Microsoft early this year about rushing the integration of GPT-4 into Bing without further training, according to The Wall Street Journal. Although Microsoft forged ahead anyway, the alert proved prescient as early users noticed ‘unhinged’ behavior in the Bing AI tool.” Engadget says, “The WSJ describes the arrangement as an ‘open relationship’ where Microsoft maintains significant influence without complete control. For example, although the agreement limits OpenAI’s search-engine customers, it’s still free to work with Microsoft’s rivals. That can place the two companies in precarious situations like their sales teams making overlapping pitches to the same customers. In addition, Microsoft employees have reportedly complained about diminished in-house AI spending and a lack of direct access to OpenAI’s models for its researchers and engineers.”
Microsoft CEO Discusses AI Initiatives. In an interview with Wired (6/13, Levy), Microsoft CEO Satya Nadella discusses the company’s partnership with OpenAI, integrating generative AI copilots into many of its products, and AI challenges. Nadella said Microsoft partnered with OpenAI rather than relying solely on its own products because “I felt OpenAI was going after the same thing as us. So instead of trying to train five different foundational models, I wanted one foundation, making it a basis for a platform effect. So we partnered.” Nadella said AI has the potential to transform the search industry and others, adding that he is “excited about” AI’s potential for Microsoft.
The Chronicle of Higher Education (6/14, Campbell) reports that when three University of California at Berkeley teaching assistants and a professor “suspected some students of using ChatGPT in their survey course on the history of architecture,” they soon learned “that students had used it in a range of ways.” The colleagues “flagged 13 of their 125 students for using AI-generated text,” which often contained content “not covered in class, was grammatically correct but lacked substance and creativity, and used awkward phrasing, such as adding ‘thesis statement’ before turning out a generic thesis.” They then “told everyone that they had conducted an in-depth review of submissions and would allow students to redo the assignment without penalty if they admitted to using ChatGPT. All but one of the 13 came forward, plus one who had not been flagged.”
Since these “user-friendly programs first appeared in late November, faculty members have wrestled with many new questions even as they try to figure out how the tools work.” For example, “Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate?”
CNBC (6/13, Field, Feiner) reports, “Google and OpenAI, two U.S. leaders in artificial intelligence, have opposing ideas about how the technology should be regulated by the government, a new filing reveals.” On Monday, Google “submitted a comment in response to the National Telecommunications and Information Administration’s request about how to consider AI accountability at a time of rapidly advancing technology.” Google said it preferred a “multi-layered, multi-stakeholder approach to AI governance.” OpenAI CEO Sam Altman has previously spoken in favor of creating a single agency focused on regulating the technology.
Reuters (6/13) reports, “Meta Platforms said on Tuesday that it would provide researchers with access to components of a new ‘human-like’ artificial intelligence model that it said can analyze and complete unfinished images more accurately than existing models.” According to Meta, its I-JEPA model “uses background knowledge about the world to fill in missing pieces of images, rather than looking only at nearby pixels like other generative AI models.”
The New York Times (6/14, Lu) reports, “‘Generative artificial intelligence’ is set to add up to $4.4 trillion of value to the global economy annually, according to a report from McKinsey Global Institute, in what is one of the rosier predictions about the economic effects of the rapidly evolving technology.” The McKinsey report said, “Generative A.I. has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities.” McKinsey Partner and report author Lareina Yee “acknowledged that the report was making prognostications about A.I.’s effects, but that ‘if you could capture even a third’ of what the technology’s potential is, ‘it is pretty remarkable over the next five to 10 years.’”
Forbes (6/13, Hart) reports that “a new Beatles song will be released later this year with a little help from artificial intelligence, musician Paul McCartney announced Tuesday, the latest example of how creative industries are using the fast-evolving technology as lawmakers and regulators begin to grapple with the ethical and legal issues it raises.” McCartney said “the final Beatles record” had been completed with the help of AI and would come out later this year. He “said the technology had been used to ‘extricate’ the voice of the late John Lennon from an old demo tape.” McCartney “said the technology was able to extract Lennon’s voice from a ‘ropey little bit of cassette’ and isolate his vocals from instruments on the recording.”
Science (6/14, Service) reports on concerns over the potential of artificial intelligence to assist in the development of dangerous viruses. Kevin “Esvelt, a biosecurity expert at the Massachusetts Institute of Technology, recently asked students to create a dangerous virus with the help of ChatGPT or other so-called large language models” and “after only an hour, the class came up with lists of candidate viruses, companies that could help synthesize the pathogens’ genetic code, and contract research companies that might put the pieces together.” While “Esvelt doubts that the specific suggestions made by the chatbots pose much of a pandemic threat,” he nonetheless “believes the experiment underscores how AI and other tools could make it easier for would-be terrorists to unleash new threats.” To reduce that threat, “limiting the information that chatbots and other AI engines can use as training data could help, Esvelt thinks.”
The New York Times (6/14, Satariano) reports that the EU “took an important step on Wednesday toward passing what would be one of the first major laws to regulate artificial intelligence, a potential model for policymakers around the world as they grapple with how to put guardrails on the rapidly developing technology.” The European Parliament “passed a draft law known as the A.I. Act, which would put new restrictions on what are seen as the technology’s riskiest uses.” The legislation “would severely curtail uses of facial recognition software, while requiring makers of A.I. systems like the ChatGPT chatbot to disclose more about the data used to create their programs.” This “vote is one step in a longer process” and “a final version of the law is not expected to be passed until later this year.”
The Washington Post (6/14, A1, Zakrzewski, Lima) reports that “the legislation takes a ‘risk-based approach,’ introducing restrictions based on how dangerous lawmakers predict an AI application could be.” The legislation “would ban tools that European lawmakers deem ‘unacceptable,’ such as systems allowing law enforcement to predict criminal behavior using analytics.” In addition, “it would introduce new limits on technologies simply deemed ‘high risk,’ such as tools that could sway voters to influence elections or recommendation algorithms, which suggest what posts, photos and videos people see on social networks.”
NBC News (6/14, Khogeer) reports, “As millions of users experiment with ChatGPT, some people are turning to the generative artificial intelligence chatbot for workout advice and what they say is a cheap alternative to a personal trainer.” However, “some trainers say they can’t be replaced that easily, and that taking workout advice from a chatbot could have some unexpected consequences.” In particular, “using AI for nutrition and health-related advice is already under scrutiny,” and “Some medical experts warned that AI chatbots shouldn’t replace consulting with real-life health professionals.” Meanwhile, amid “the online discourse about replacing trainers with AI, some personal trainers were optimistic, rather than fearful, about the future role of AI in their careers and the fitness industry at large.”
TechCrunch (6/13, Lomas) reports Google has “delayed a planned launch of its generative AI chatbot, Bard, in the European Union this week, according to the Irish Data Protection Commission (DPC) – the tech giant’s lead data protection authority in the region.” The development “comes long after OpenAI launched a free research preview (November 2022) of its rival chatbot, ChatGPT, without applying limits on where in the world Internet users could access it.” DPC Deputy Commissioner Graham Doyle “said today that Google ‘recently’ informed the authority of its intention to launch Bard in the EU ‘this week.’ However he said it had not provided the regulator with adequate information ahead of the planned date and a launch would not now happen in the intended timeframe.”
Reuters (6/14, Paul) reports Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) “introduced legislation on Wednesday that would allow social media companies to be sued for spreading harmful material created with artificial intelligence.” The senators “announced the bill that would create an AI carve-out to Section 230, a law that shields internet companies from liability for the content posted to their platforms.” Reuters says it “would open the door for lawsuits to proceed against social media companies for claims based on emerging generative AI technology, including fabricated but strikingly realistic ‘deepfake’ photos and videos of real people.”
Education Week (6/14, Schwartz) reports new research shows that “receiving feedback from an AI observer prompts teachers to engage more deeply with students during class – leading them to more regularly acknowledge student contributions and encourage their questions.” The study “took place in an online course with adult learners,” though the researchers “say that the AI feedback tool could have a place in K-12 classrooms too, a way of providing more consistent feedback to teachers than one or two coaches would have time for.” While some organizations “are already piloting a similar strategy with tutors,” adapting the tool “used in this study for a school setting would require careful consideration about how it could be best deployed, and who would have access to observation data, said Dora Demszky, an assistant professor of education data science at Stanford’s Graduate School of Education, and the lead author on the paper.”
Politico (6/15, Chatterjee) interviews Senate Intelligence Chair Mark Warner, who on Thursday warned that “there’s a global race to build guardrails for how governments tap artificial intelligence – and China is setting the pace of development,” as the nation “has a variety of efforts in AI, and they have already actually moved even further than Europe in having specific legislation.” Warner further said China presents a tough technological challenge for the U.S. because it “has such scale – both in terms of data and compute power – [and] is a leading competitor in this field.” Additionally, the Senator “is worried the Chinese government will use AI on an ‘offensive basis, or on a misinformation and deceptive basis against the balance of the world.’”
Bloomberg (6/15, Subscription Publication) reports Wall Street’s embrace “of artificial intelligence poses acute risks to the US financial system and demands more congressional scrutiny, according to a key Democratic lawmaker.” Rep. Maxine Waters, “the top Democrat on the House Financial Services Committee, warned that financial firms’ use of the technology could lead to more discrimination in lending.” She also “called on the panel’s chairman, Republican Patrick McHenry, to urgently hold a hearing on generative AI, which creates content such as images and text based off a user’s prompt. Waters said those tools may lead to data leaks and the spread of misinformation.”
Education Week (6/15, Prothero) reports, following the emergence of ChatGPT and concerns about “how it could be used to supercharge plagiarism and other forms of student cheating,” that sense of panic has now “given way to an increasing number of teachers experimenting with how they can leverage AI in their classrooms.” To understand how teachers are getting creative while also factoring in the potential downsides of this new technology, Education Week asked teachers in a LinkedIn post whether they had integrated AI tools or discussions about artificial intelligence into any lessons this school year – 40 percent said they had. Among the key themes on how to use AI in education that emerged from the LinkedIn survey, some teachers are “encouraging students to use AI tools to help prepare for tests.” For example, one teacher posted on LinkedIn “that his students use the tool Class Companion to prepare for the AP History exam.”
Tech Times (6/15, Cruz) reported that “the application of machine learning in safety-critical autonomous systems, such as self-driving cars and power systems, poses unique risks to human safety.” A recent “groundbreaking research paper challenges the prevailing notion that an unlimited number of trials is necessary to learn safe actions in unfamiliar environments.” The study, led by Juan Andres Bazerque, an assistant professor in Electrical and Computer Engineering at the University of Pittsburgh, in collaboration with Enrique Mallada, an associate professor in ECE at Johns Hopkins University, “introduces a fresh approach to machine learning that prioritizes acquiring safe actions while striking a balance between optimality, encountering hazardous situations, and swiftly identifying unsafe acts.”
CNBC (6/19, Handley) reports AI technology “is likely to shake up the transportation industry – transforming how supply chains are managed and reducing the number of jobs carried out by people, according to analysts and industry insiders.” New technologies including “sidewalk robots, self-driving trucks and customer service bots are on their way, along with generative AI that can predict disruptions or explain why sales forecasts may have been missed, according to industry executives.” Morgan Stanley Analyst Ravi Shankar wrote in a May research note, “AI may be able to totally (or nearly) remove all human touchpoints in the supply chain including ‘back office’ tasks.” Shankar and his fellow analysts added, “The Freight Transportation space is on the cusp of a generational shift driven by disruptive technologies incl. Autonomous, EV, blockchain and drones. AI is the latest one of these potentially transformative technologies to emerge – and perhaps the most powerful to-date.”
The Washington Post (6/17, A1, Zakrzewski, Lima) reports “members of Congress and their staffs are seeking a crash course on AI” as they advance plans to regulate the technology. To address the “swiftly evolving” technology, lawmakers are “crowding into briefings with top industry executives, summoning leading academics for discussions and taking other steps to try to wrap their heads around the emerging field.” However, their “gaps in technical expertise have provided an opening for corporate interests,” with executives seeking to “develop AI without hindrance” hoping to influence policy.
Bloomberg (6/22, Subscription Publication) reports OpenAI CEO Sam Altman, speaking at the Bloomberg Technology Summit in San Francisco, said that although there are many ways AI “could go wrong,” the potential benefits outweigh the costs. Altman “spoke about several areas where AI could be beneficial, including medicine, science and education.” He said, “I think this will be the most important step yet that humanity has to get through with technology...And I really care about that.”
Analyst: US, China Tech Industry Tensions Likely To Expand Focus On Generative AI. CNBC (6/23, Kharpal) reports generative AI “could be the new battleground in the battle for tech supremacy between the U.S. and China, according to one analyst.” Albright Stonebridge technology policy lead Paul Triolo told CNBC, “There will likely be more attempts coming from Washington to target the development in China of some types of applications, and generative AI could be in the crosshairs in the coming year.” The increasing tension comes “as the Biden administration determines which technologies could benefit both China’s military modernization, and which could also boost Chinese companies’ ability to make breakthroughs in generative AI,” Triolo added.
Bloomberg (6/22, Subscription Publication) reports Amazon Web Services is investing $100 million to establish an AWS Generative AI Innovation Center “to help customers develop and deploy new kinds of artificial intelligence products.” The Center “will link customers with company experts in AI and machine learning.” They will “help a range of clients in health care, financial services and manufacturing build customized applications using the new technology.” AWS CEO Adam Selipsky said, “We will bring our internal AWS experts free-of-charge to a whole bunch of AWS customers, focusing on folks with significant AWS presence, and go help them turbocharge their efforts to get real with generative AI, get beyond the talk.”
Bloomberg (6/22, Kim, Subscription Publication) reports, “US lawmakers on Thursday questioned how new rules for artificial intelligence can protect against the technology’s risks without reinforcing the early advantages enjoyed by tech giants such as Microsoft and Google.” House Science, Space and Technology Committee Chairman Frank Lucas (R-OK) “opened Thursday’s hearing by recognizing the need for Congress to set some guardrails for AI while warning that any regulation must also promote innovation – especially as the US races China to develop machine learning capabilities.” During the hearing, “Shahin Farshchi, a general partner of venture capital firm Lux Capital, warned the panel that a few companies are already dominating a sector in which it can cost more than $100 million to train the most advanced generative AI models.”
Education Week (6/22, Klein) reports, “Richard Culatta, the CEO of the International Society for Technology in Education, feels like a powerful moment for educational technology has arrived.” This comes as ISTE’s annual convention “is set to start early next week,” and developments he previously pointed to regarding AI and universal internet connectivity “show more promise than ever, Culatta said.” In an interview with EdWeek, Culatta said ISTE’s “overall message on AI at this moment in time” is going to be “that the education community is largely focusing on the wrong things.” He added that “if you’re worried about cheating, the problem is you’re assessing [the wrong things],” and he urged teachers not to wait for AI to be “developed a bit more” before engaging with it.
Fortune (6/23, Prakash) reported one study suggests artificial intelligence, “which has seen a surge in attention following the introduction of generative A.I. tools like ChatGPT, will likely disproportionately impact women in the workplace.” According to a report by the University of North Carolina’s Kenan-Flagler Business School, 80 percent of working women “are engaged in occupations that are at risk of being disrupted by A.I. – versus 60% of men.” The report “uses Goldman Sachs research from March as a reference for the 15 occupations that will be most affected by A.I., including roles in management, engineering, and legal.” The findings show that “nearly two-thirds of jobs will be affected by generative A.I., with anywhere between 25% to 50% of the tasks in these jobs being exposed to automation. Goldman Sachs predicted earlier this year that nearly 300 million jobs globally could be impacted by A.I.”
CNBC (6/23, Anwah, Rosenbaum) reported, “In a signal of just how quickly and widely the artificial intelligence boom is spreading, nearly half of the companies (47%) surveyed by CNBC say that AI is their top priority for tech spending over the next year, and AI budgets are more than double the second-biggest spending area in tech, cloud computing, at 21%.” Lilly Chief Information and Digital Officer Diogo Rau said, “It’s hard to think of an area that this couldn’t help.”
The Wall Street Journal (6/22, Toplensky, Subscription Publication) interviews Google Chief Sustainability Officer Kate Brandt on sustainability challenges and themes for the coming year, including the role of AI, which she says will allow a wide range of organizations to accelerate their climate efforts.
The Washington Post (6/22) reports, “Senate Majority Leader Charles E. Schumer (D-N.Y.) kick-started what he called an ‘all hands on deck’ effort to craft new rules for artificial intelligence on Wednesday, a sprawling and potentially lengthy legislative attempt.” The announcement “was light on details about what specific proposals may come out of it.” The Post adds that a bigger focus was on AI ‘innovation,’ rather than ‘risk’ or ‘harm,’ a focus that is “likely to be welcomed by business leaders, who are keen to keep boosting U.S. AI development, but is already miffing some consumer advocates.” Schumer “did not once explicitly mention efforts in the European Union to enact new AI rules,” but “did warn that if the United States does not ‘set the norms for AI’s proper uses, others will,’ singling out China in his remarks.”
AI’s Use In Election Ads Sparks Desire For Safeguards. The New York Times (6/25, Hsu, Myers) reports that “what began a few months ago” as a “slow drip” of fund-raising emails and “promotional images composed by A.I. for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.” Increasingly, political consultants, “election researchers and lawmakers say setting up new guardrails, such as legislation reining in synthetically generated ads, should be an urgent priority.” Existing defenses, such as “social media rules and services that claim to detect A.I. content, have failed to do much to slow the tide,” and some politicians see artificial intelligence “as a way to help reduce campaign costs, by using it to create instant responses to debate questions or attack ads, or to analyze data that might otherwise require expensive experts.”
Sens. Peters And Markey Ask GAO For Detailed Assessment Of Dangers From AI. The Hill (6/23, Klar, Kagubare) reports that Sens. Gary Peters (D-MI) and Ed Markey (D-MA) have asked the Government Accountability Office (GAO) to “review the potential harms of generative artificial intelligence (AI).” They asked for a “detailed technology assessment.”
The Iowa Capital Dispatch (6/22, Strong) reported, “Three professors from Iowa’s public universities are working to raise awareness of the importance and contradictory nature of artificial intelligence in higher education, pointing to concerns about privacy, bias and academic integrity.” Speaking to the Board of Regents on June 14, they pointed “to the benefits and detriments of AI use in classrooms, as it is necessary for the workforce in some occupations and hinders others.” Barrett Thomas, a University of Iowa professor and associate dean of the Tippie College of Business, said, “It’s important that we are, in all cases, educating our faculty, staff and students on the use of these technologies, both from the perspective of the opportunity they offer, but also the challenges and concerns that they present.” He also agreed with an Iowa State University professor “about the detriments of the newer AI generator technology, including bias.”
The New York Times (6/26, Lohr) reports, “ChatGPT-style artificial intelligence is coming to health care, and the grand vision of what it could bring is inspiring. Every doctor, enthusiasts predict, will have a superintelligent sidekick, dispensing suggestions to improve care.” However, “first will come more mundane applications of artificial intelligence.” One “prime target will be to ease the crushing burden of digital paperwork that physicians must produce, typing lengthy notes into electronic medical records required for treatment, billing and administrative purposes.” For the time being, “new A.I. in health care is going to be less a genius partner than a tireless scribe.”
The Washington Post (6/26, Verma, Oremus) reports an artificial intelligence chatbot named Allie was “created for sexual play – which sometimes carries out graphic rape and abuse fantasies.” While firms like OpenAI, Microsoft, and Google “rigorously train their AI models to avoid a host of taboos, including overly intimate conversations, Allie was built using open-source technology – code that’s freely available to the public and has no such restrictions. Based on a model created by Meta, called LLaMA, Allie is part of a rising tide of specialized AI products anyone can build, from writing tools to chatbots to data analysis applications.” While advocates see open-source AI “as a way around corporate control,” critics worry “it could also enable fraud, cyber hacking and sophisticated propaganda campaigns.”
Insider (6/24, Chowdhury) reports that OpenAI CEO Sam Altman has been engaged in a “diplomatic mission” around the world that “[is] closer to a religious mission, one that’s seen Altman and his team whiz across 16 countries in the space of three months.” Insider says, “OpenAI’s goal is to cement AI’s future importance to humanity. According to people on the ground during Altman’s tour, he achieved what he set out to do.” Insider adds, “Altman’s sales pitch has rested on two central pillars. The first: AI will only work for you if you work with it. The second: AI is inevitable so get onboard before it’s too late.”
Axios (6/26, Solender) reports the House is placing new “guardrails around use of the popular AI chatbot ChatGPT by congressional offices” in the latest example of “how Washington is grappling with the implications of the recent explosive growth in generative AI both legislatively and personally.” In a memo to House “staffers on Monday morning, a copy of which was obtained by Axios, the chamber’s Chief Administrative Officer Catherine L. Szpindor wrote that offices are ‘only authorized’ to use the paid ChatGPT Plus.” In contrast to the free service, she said, the $20-per-month subscription version “incorporates important privacy features that are necessary to protect House data.” She also “said in addition to other versions of ChatGPT, no other large language models are authorized for use.”
The New York Times (6/26, Singer) reports tech industry hype “and doomsday prophecies around A.I.-enhanced chatbots like ChatGPT sent many schools scrambling this year to block or limit the use of the tools in classrooms.” Newark Public Schools “is taking a different approach.” It is “one of the first school systems in the United States to pilot test Khanmigo, an automated teaching aid developed by Khan Academy, an education nonprofit whose online lessons are used by hundreds of districts.” Newark “has essentially volunteered to be a guinea pig for public schools across the country that are trying to distinguish the practical use of new A.I.-assisted tutoring bots from their marketing promises.”
Education Week (6/26, Klein) reports educators are currently wondering how to “keep students from using ChatGPT and other AI tools to cheat on tests and other assignments,” as well as how to engage students in learning “when they have access to so much distracting technology.” Manhattan Beach High School professor Michael Hernandez said the answer for both questions is to “ditch traditional assessments and get kids engaged in critical thinking and storytelling that has a clear purpose behind it.” During the International Society for Technology in Education’s annual conference, Hernandez said storytelling “isn’t as literal as asking students to create their own short documentaries,” but that it can include digital books, podcast production, and infographics.
Bloomberg (6/27, Subscription Publication) reports the Chinese tech sector is now working to compete with companies such as Google and Microsoft in the new “global artificial intelligence race.” Local billionaires, engineers, and business veterans “alike now harbor a remarkably consistent ambition: to outdo China’s geopolitical rival in a technology that may determine the global power stakes.” This diffuse collection of competitors is expected to “propel some $15 billion of spending on AI technology this year.” There is also a general feeling within China’s industry that the Chinese Communist Party will support these efforts as the country seeks to stay competitive with its geopolitical rival, the US.
Politico (6/27, Chatterjee) reports that to ensure the “Pentagon is keeping pace with its adversaries, Sens. Mark Warner (D-Va.), Michael Bennet (D-Colo.) and Todd Young (R-Ind.) introduced a bill this month to analyze how the U.S. is faring on key technologies like AI relative to the competition.” The Senate version of the “2024 NDAA would create a prize competition to detect and tag content produced by generative AI, a key DOD concern because of the potential for AI to generate misleading but convincing deep fakes.” It also directs “the Pentagon to develop AI tools to monitor and assess information campaigns, which could help the military track disinformation networks and better understand how information spreads in a population.” Other proposals include standing up an “entirely new office dedicated to autonomous systems,” and in the House version of the bill, provisions include “an analysis of human-machine interface technologies that would set the stage for [the] proposed office” and developing a “process to determine what responsible AI use looks like for the Pentagon’s widespread AI stakeholders — including all the military forces and combatant commands.”
Bloomberg (6/27, Li, Subscription Publication) reports the state of New York “is seeking to procure a supercomputer to run artificial intelligence systems and gain a deeper understanding of the technology for more effective regulation.” New York Department of Financial Services Superintendent Adrienne Harris, speaking at the Point Zero Forum for financial regulation, said, “My vision is for DFS and other regulators to become the regulator of the future, meaning that we are embracing reg tech to the public advantage, using data driven approaches that leverage data analytics to enhance our ability to predict and respond to events in the marketplace.” She added, “All of the major players in private tech and software are moving toward AI...That’s no secret, but it would be a huge missed opportunity for regulators to not make use of these tools as well.”
Inside Higher Ed (6/29, Moody) reports last month, Wells College President Jonathan Gibralter delivered a commencement address that “hit the usual themes: be prepared for challenges and setbacks, cultivate perseverance, and embrace opportunities.” However, before he ended the address, Gibralter revealed he’d asked ChatGPT “to write a commencement address from the president of Wells College to the graduates.” He then challenged the graduates “to stay intellectually curious.” In an interview with Inside Higher Ed, Gibralter explained he’d “sought to emphasize the importance of forging human connections – and the value of a liberal arts education – in an increasingly technologically driven world.” Other college presidents, such as those at Northeastern University and Michigan State, have also used ChatGPT for a commencement speech this season.
Engadget (6/28, Shanklin) reports Microsoft announced a new AI certification program that “will offer free coursework through LinkedIn” and “provide tips for composing the most effective prompts while showing beginners the ropes, giving them a chance to keep pace with our rapidly changing world.” The AI Skills Initiative under Microsoft’s Skills for Jobs program “will include free courses created by (Microsoft subsidiary) LinkedIn, offering learners ‘the first Professional Certificate on Generative AI in the online learning market.’”
Vox (6/28, Molla) reports as tech workers deal with “pay stagnation, layoffs, and generally less demand for their skills than they’d enjoyed for the past decade, the artificial intelligence specialist has become the new ‘it’ girl in Silicon Valley.” While tech companies and investors “pull back seemingly everywhere else in tech, money is still flowing into AI, which the industry sees as the next big thing. That’s meant outsize demand, pay, and perks for people who can facilitate that kind of work.” This attracts people “who’ve recently been laid off in tech or who worry that their tech jobs don’t have the upward mobility they used to,” and those in adjacent tech careers are now “attempting to reposition themselves where the good jobs are.”
The Washington Post (6/28, Abril) reports new Gen Z college graduates “may be the most prepared to champion and use generative artificial intelligence at work.” For months, many of these individuals “have been exploring the technology’s capabilities, sharpening their skills and learning how to best apply it to their tasks at hand. And while some are cautious about AI’s potential harms, many are more fascinated than they are worried about the technology.” This comes as generative AI is being integrated “into workplace tools like email providers, graphics editors, productivity tools and coding programs.” Despite some leaders, “including AI creators, warning about doomsday scenarios in which the tech takes over humanity, hundreds of thousands of Gen Z students – those born between 1997 and 2012 – have experimented with it, and in some cases, have even been encouraged by their schools to explore it.”
Bloomberg (6/28, Xie, Poritz, Subscription Publication) reports “a group of anonymous individuals” filed for class action status in a suit against OpenAI over the company’s use of personal data to train its large language models for generative AI. The lawsuit claims OpenAI trained its models on data lifted from “books, articles, websites and posts – including personal information obtained without consent,” even pointing to the possibility of “civilizational collapse” from OpenAI’s practices. In the suit, the plaintiffs “cite $3 billion in potential damages, based on a category of harmed individuals they estimate to be in the millions.”
The Washington Post (6/28, De Vynck) reports, “The lawsuit seeks to test out a novel legal theory – that OpenAI violated the rights of millions of internet users when it used their social media comments, blog posts, Wikipedia articles and family recipes.” Filed in the Northern District of California on Wednesday by Clarkson, “the lawsuit goes to the heart of a major unresolved question hanging over the surge in ‘generative’ AI tools such as chatbots and image generators.” Besides OpenAI, “Google, Facebook, Microsoft and a growing number of other companies” also scrape information from the internet, “but Clarkson decided to go after OpenAI because of its role in spurring its bigger rivals to push out their own AI when it captured the public’s imagination with ChatGPT last year, Clarkson said.”
CNBC (6/29, Field) reports, “The first drug fully generated by artificial intelligence entered into clinical trials with human patients this week.” A Hong Kong-based startup “created the drug, INS018_055, as a treatment for idiopathic pulmonary fibrosis, or IPF, a chronic disease that causes scarring in the lungs.” Insilico Medicine Founder and CEO Alex Zhavoronkov said, “While there are other AI-designed drugs in trials, ours is the first drug with both a novel AI-discovered target and a novel AI-generated design.”
Reuters (6/29) reports generative AI raises competition concerns and “is a focus of the Federal Trade Commission’s Bureau of Competition along with its Office of Technology, the agency said in a blog post by the staff of the two offices.” The staff write, “Generative AI depends on a set of necessary inputs. If a single company or a handful of firms control one or several of these essential inputs, they may be able to leverage their control to dampen or distort competition in generative AI markets.” The post identified the “inputs as big datasets when the technology is being developed, a well-trained engineering and research workforce, and computational power with specialized chips like graphical processing units.”
Bennet Urges Tech CEOs To Label, Set Standards For AI-Generated Content. Reuters (6/29, Bartz) reports Sen. Michael Bennet (D-CO), who is “active in artificial-intelligence issues, wrote to leading tech firms on Thursday to urge them to label AI-generated content and limit the spread of material aimed at misleading users.” In the letter to the CEOs of OpenAI, Microsoft and others, Bennet said, “Fabricated images can derail stock markets, suppress voter turnout, and shake Americans’ confidence in the authenticity of campaign material. Continuing to produce and disseminate AI-generated content without clear, easily comprehensible identifiers poses an unacceptable risk to public discourse and electoral integrity.” In his letter, Bennet “asked the executives to answer a series of questions by July 31, including what standards or requirements they employ to identify AI content and how those standards were developed and audited to establish effectiveness.” He also “asked what happens to users who violate the rules.”
Inside Higher Ed (6/30, Coffey) reported Harvard University’s flagship computer science course “is now using ChatGPT as a way of freeing up teaching assistants to spend more quality time with students.” Harvard’s Computer Science 50: Introduction to Computer Science “rolled out AI as a tool in its summer program about two weeks ago. The popular course has about 70 students this summer and will have more than 600 in the fall.” Building upon ChatGPT, “Harvard is using the technology to help computer science students understand highlighted lines of code and advise them on why and how to improve their code’s style. It is also used to answer frequently asked questions.” Future features “emphasize nudging students in the right direction by asking rhetorical questions, much like a TA would, versus outright telling students the problems with their code.”
The Wall Street Journal (6/30, Mims, Subscription Publication) reported on how the introduction of artificial intelligence software is affecting the software industry. The Journal said companies are using AI tools to save money on programmers and added that this approach could foreshadow how companies use the tools to replace other white-collar jobs.
The AP (6/30, Bonnell) reported more than 150 tech industry executives “are urging the European Union to rethink the world’s most comprehensive rules for artificial intelligence, saying Friday that upcoming regulations will make it harder for companies in Europe to compete with rivals overseas, especially when it comes to the technology behind systems like ChatGPT.” TechCrunch (6/30, Butcher) reports the executives “highlighted the risks of tight regulation, saying the rules could threaten the ability of European companies to compete in AI, while also failing to deal with the potential challenges.”
The Verge (6/30, Weatherbed) reported the companies “flagged” as one of their “major concerns” about the EU’s Artificial Intelligence Act “the legislation’s strict rules specifically targeting generative AI systems, a subset of AI models that typically fall under the ‘foundation model’ designation.” The Verge explained under the AI Act, “providers of foundation AI models – regardless of their intended application – will have to register their product with the EU, undergo risk assessments, and meet transparency requirements, such as having to publicly disclose any copyrighted data used to train their models.”
In his tech newsletter for The New York Times (6/30), Brian Chen wrote that generative AI’s “specialty is language – guessing which word comes next – and students quickly realized that they could use ChatGPT and other chatbots to write essays.” However, it’s easy “to get caught cheating with generative A.I. because it is prone to making stuff up, a phenomenon known as ‘hallucinating.’” Chen said the AI can “also be used as a study assistant,” though when studying, “it’s paramount that the information is correct, and to get the most accurate results, you should direct A.I. tools to focus on information from trusted sources rather than pull data from across the web.” This can be done with some AI tools like Humata.AI, Wordtune Read, “and various plug-ins inside ChatGPT, [which] act as research assistants that will summarize documents for you.”