Research – Cornell Tech

Google, Cornell to Partner in Online Security Initiative

By Tom Fleischman, Cornell Chronicle

Cornell is one of four higher-education institutions in a new partnership with Google aimed at establishing New York City as the world leader in cybersecurity.

On June 12, Google announced the Google Cyber NYC Institutional Research Program to jump-start the cybersecurity ecosystem, allocating $12 million to the four institutions, which include a team led by Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science.

“Making systems safe, secure and trustworthy is incredibly hard, and it’s only going to become harder in the age of AI. Cornell is grateful and eager to take on this challenge with Google and with our colleagues across New York,” said Greg Morrisett, the Jack and Rilla Neafsey Dean and Vice Provost of Cornell Tech and principal investigator (PI) for Cornell. “We believe New York will be the epicenter for next-generation research, students and startups in cyber, trust and safety, and we applaud Google for its leadership and unwavering support for Cornell University.”

Cornell – along with the City University of New York, Columbia University’s Fu Foundation School of Engineering and Applied Science and New York University’s Tandon School of Engineering – will receive $1 million each in annual funding through 2024 (with the option to continue through 2025). The funding will support approximately 90 collaborative research projects across the four institutions in areas where further research could encourage the development of more secure digital ecosystems and inspire innovation.

While most current security-related research is focused on technical challenges, many of the most significant security failures involve humans and can often be attributed to poor design that fails to take the human factor into account. This partnership will use an interdisciplinary approach to build better foundations for secure systems and ensure that they are deployed in ways that address rather than exacerbate societal problems.

The partnership – part of a $10 billion cybersecurity initiative that Google announced in 2021 – was announced in a “fireside chat” with the deans of the four institutions along with Phil Venables, Google Cloud chief information security officer.

“The Google Cyber NYC Institutional Research Program will further propel New York as a research leader in cybersecurity, alongside the work of preeminent city institutions like New York City Cyber Command,” Venables said. “At Google, we’re committed to being bold and responsible stewards of emerging technology like AI, so we’re working together with four of New York’s leading institutions to make sure the city is prepared as the threat landscape continually shifts.”

“Cornell’s commitment both to developing state-of-the-art computing and information technologies, and to understanding the societal and human impact of these technologies, make us particularly well positioned to partner with Google,” said Kavita Bala, dean of Cornell Bowers CIS. “We look forward to building upon our longstanding excellence in computer security to address one of the biggest challenges of our time – to create a more secure digital environment for all.”

Nate Foster, professor of computer science at Cornell Bowers CIS, and Thomas Ristenpart, associate professor of computer science at Cornell Tech and at Cornell Bowers CIS, will act as co-PIs for the partnership.

In addition to supporting research projects, the funding also will help grow the institutions’ respective cybersecurity degree programs; increase the number of qualified security professionals entering the workforce; and address diversity gaps in the cybersecurity industry by recruiting and developing workers from underrepresented groups.

Partnership between Cornell and Google is not new: The Cornell Tech campus got its start in 2012 in Google’s New York City offices, on Eighth Avenue, before moving to its state-of-the-art Roosevelt Island campus in 2017. For more than two decades Google has funded Cornell researchers across the Ithaca, Cornell Tech and Weill Cornell Medicine campuses, and funded key initiatives including CSMore and SoNIC that foster diversity and inclusion in the fields of computing and information science.

This story originally appeared in the Cornell Chronicle.

Writing With AI Help Can Shift Your Opinions

By Patricia Waldron, Cornell Ann S. Bowers College of Computing and Information Science

Artificial intelligence-powered writing assistants that autocomplete sentences or offer “smart replies” not only put words into people’s mouths, they also put ideas into their heads, according to new research.

Maurice Jakesch, a doctoral student in the field of information science, asked more than 1,500 participants to write a paragraph answering the question, “Is social media good for society?” People who used an AI writing assistant that was biased for or against social media were twice as likely to write a paragraph agreeing with the assistant, and significantly more likely to say they held the same opinion, compared with people who wrote without AI’s help.

The study suggests that the biases baked into AI writing tools – whether intentional or unintentional – could have concerning repercussions for culture and politics, researchers said.

“We’re rushing to implement these AI models in all walks of life, but we need to better understand the implications,” said co-author Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech and of information science in the Cornell Ann S. Bowers College of Computing and Information Science. “Apart from increasing efficiency and creativity, there could be other consequences for individuals and also for our society – shifts in language and opinions.”

While others have looked at how large language models such as ChatGPT can create persuasive ads and political messages, this is the first study to show that the process of writing with an AI-powered tool can sway a person’s opinions. Jakesch presented the study, “Co-Writing with Opinionated Language Models Affects Users’ Views,” at the 2023 CHI Conference on Human Factors in Computing Systems in April, where the paper received an honorable mention.

To understand how people interact with AI writing assistants, Jakesch steered a large language model to have either positive or negative opinions of social media. Participants wrote their paragraphs – either alone or with one of the opinionated assistants – on a platform he built that mimics a social media website. The platform collects data from participants as they type, such as which of the AI suggestions they accept and how long they take to compose the paragraph.
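
The study’s setup lends itself to a rough sketch: steer a language model toward a stance, surface its continuations as suggestions, and log which ones a participant accepts and how long they take. The snippet below is an illustrative outline only — the prompt wording, the `call_language_model` stub and the logged fields are assumptions, not the platform Jakesch built.

```python
# Illustrative outline of an "opinionated" writing assistant and its
# interaction log. The model call is a stub, not the study's platform.
import time

ASSISTANT_STANCE = "positive"  # experimental condition: "positive" or "negative"

STANCE_PROMPT = {
    "positive": "Continue the draft with arguments that social media benefits society.",
    "negative": "Continue the draft with arguments that social media harms society.",
}[ASSISTANT_STANCE]

def call_language_model(prompt: str) -> str:
    """Stub standing in for a real large language model API call."""
    return " and it helps people stay connected with distant friends"

interaction_log = []  # what the platform records as a participant types

def suggest(draft: str) -> str:
    """Produce a stance-steered continuation of the participant's draft."""
    return call_language_model(f"{STANCE_PROMPT}\n\nDraft: {draft}")

def record(event: str, detail: str) -> None:
    interaction_log.append({"time": time.time(), "event": event, "detail": detail})

# One simulated interaction: the participant accepts the biased suggestion.
draft = "Social media is useful"
suggestion = suggest(draft)
record("suggestion_shown", suggestion)
draft += suggestion
record("suggestion_accepted", suggestion)
print(draft)
print(f"{len(interaction_log)} events logged")
```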

People who co-wrote with the pro-social media AI assistant composed more sentences arguing that social media is good, and vice versa, compared to participants without a writing assistant, as determined by independent judges. These participants also were more likely to profess their assistant’s opinion in a follow-up survey.

The researchers explored the possibility that people were simply accepting the AI suggestions to complete the task quicker. But even participants who took several minutes to compose their paragraphs came up with heavily influenced statements. The survey revealed that a majority of the participants did not even notice the AI was biased and didn’t realize they were being influenced.

“The process of co-writing doesn’t really feel like I’m being persuaded,” said Naaman. “It feels like I’m doing something very natural and organic – I’m expressing my own thoughts with some aid.”

When repeating the experiment with a different topic, the research team again saw that participants were swayed by the assistants. Now, the team is looking into how this experience creates the shift, and how long the effects last.

Just as social media has changed the political landscape by facilitating the spread of misinformation and the formation of echo chambers, biased AI writing tools could produce similar shifts in opinion, depending on which tools users choose. For example, some organizations have announced they plan to develop an alternative to ChatGPT, designed to express more conservative viewpoints.

These technologies deserve more public discussion regarding how they could be misused and how they should be monitored and regulated, the researchers said.

“The more powerful these technologies become and the more deeply we embed them in the social fabric of our societies,” Jakesch said, “the more careful we might want to be about how we’re governing the values, priorities and opinions built into them.”

Advait Bhat from Microsoft Research, Daniel Buschek of the University of Bayreuth and Lior Zalmanson of Tel Aviv University contributed to the paper.

Support for the work came from the National Science Foundation, the German National Academic Foundation and the Bavarian State Ministry of Science and the Arts.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story originally appeared in the Cornell Chronicle.

(Almost) Everyone Likes a Helpful Trash Robot

By Patricia Waldron, Cornell Ann S. Bowers College of Computing and Information Science

How do New Yorkers react to robots that approach them in public looking for trash? Surprisingly well, actually.

Cornell researchers built and remotely controlled two trash barrel robots – one for landfill waste and one for recycling – at a plaza in Manhattan to see how people would respond to the seemingly autonomous robots. Most people welcomed them and happily gave them trash, though a minority found them to be creepy. The researchers now have plans to see how other communities behave. If you’re a resident of New York City, these trash barrel robots may be coming soon to a borough near you.

A team led by Wendy Ju, associate professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and a member of the Department of Information Science in the Cornell Ann S. Bowers College of Computing and Information Science, constructed the robots from a blue or gray barrel mounted on recycled hoverboard parts. They equipped the robots with a 360-degree camera and operated them using a joystick.

“The robots drew significant attention, promoting interactions with the systems and among members of the public,” said co-author Frank Bu, a doctoral student in the field of computer science. “Strangers even instigated conversations about the robots and their implications.”

Bu and Ilan Mandel, a doctoral student in the field of information science, presented the study, “Trash Barrel Robots in the City,” in the video program at the ACM/IEEE International Conference on Human-Robot Interaction last month.

In the video footage and interviews, people expressed appreciation for the service the robots provided and were happy to help move them when they got stuck, or to clear away chairs and other obstacles. Some people summoned the robot when they had trash – waving it like a treat for a dog – and others felt compelled to “feed” the robots waste when they approached.

However, several people voiced concerns about the cameras and public surveillance. Some raised middle fingers to the robots and one person even knocked one over.

People tended to assume that the robots were “buddies” who were working together, and some expected them to race each other for the trash. As a result, some people threw their trash into the wrong barrel.

Researchers call this type of research, in which a robot appears autonomous but people are controlling it from behind the scenes, a Wizard of Oz experiment. It’s helpful during prototype development because it can flag potential problems robots are likely to encounter when interacting with humans in the wild.
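
In rough outline, a Wizard of Oz deployment pairs a hidden human operator with logging that treats the robot as if it were autonomous. The sketch below is illustrative only: the joystick and motor functions are stubs, not the team’s actual control software.

```python
# Illustrative Wizard of Oz loop: a hidden operator drives the robot while
# the system logs each command for later analysis. All hardware calls are
# stubs, not the researchers' control software.
import time

def read_operator_command(step: int) -> str:
    """Stub for the hidden operator's joystick input."""
    return ["forward", "turn_left", "pause_near_person", "return_to_base"][step % 4]

def drive(command: str) -> None:
    """Stub for motor commands sent to the hoverboard base."""
    print(f"robot executes: {command}")

field_log = []

for step in range(4):
    command = read_operator_command(step)
    drive(command)
    # Together with the 360-degree camera footage, entries like this are
    # what researchers review when coding interactions after the fact.
    field_log.append({"time": time.time(), "command": command})

print(f"logged {len(field_log)} control events")
```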

Ju had previously deployed a trash barrel robot on the Stanford University campus, where people had similarly positive interactions. In New York City, initially she had envisioned new types of mobile furniture, such as chairs and coffee tables.

“When we shared with them the trash barrel videos that we had done at Stanford, all discussions of the chairs and tables were suddenly off the table,” Ju said. “It’s New York! Trash is a huge problem!”

Now, Ju and her team are expanding their study to encompass other parts of the city. “Everyone is sure that their neighborhood behaves very differently,” Ju said. “So, the next thing that we’re hoping to do is a five boroughs trash barrel robot study.” Michael Samuelian, director of the Urban Tech hub at Cornell Tech, has helped the team to make contact with key partners throughout the city for the next phase of the project.

Doctoral student Wen-Ying “Rei” Lee also contributed to the study.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story originally appeared in the Cornell Chronicle.

AI Tool Gains Doctors’ Trust by Giving Advice Like a Colleague

By Patricia Waldron, Cornell Ann S. Bowers College of Computing and Information Science

Hospitals have begun using “decision support tools” powered by artificial intelligence that can diagnose disease, suggest treatment or predict a surgery’s outcome. But no algorithm is correct all the time, so how do doctors know when to trust the AI’s recommendation?

A new study led by Qian Yang, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science, suggests that if AI tools can counsel the doctor like a colleague – pointing out relevant biomedical research that supports the decision – then doctors can better weigh the merits of the recommendation.

The researchers will present the new study, “Harnessing Biomedical Literature to Calibrate Clinicians’ Trust in AI Decision Support Systems,” in April at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems.

Previously, most AI researchers have tried to help doctors evaluate suggestions from decision support tools by explaining how the underlying algorithm works, or what data was used to train the AI. But an education in how AI makes its predictions wasn’t sufficient, Yang said. Many doctors wanted to know if the tool had been validated in clinical trials, which typically does not happen with these tools.

“A doctor’s primary job is not to learn how AI works,” Yang said. “If we can build systems that help validate AI suggestions based on clinical trial results and journal articles, which are trustworthy information for doctors, then we can help them understand whether the AI is likely to be right or wrong for each specific case.”

To develop this system, the researchers first interviewed nine doctors across a range of specialties, and three clinical librarians. They discovered that when doctors disagree on the right course of action, they track down results from relevant biomedical research and case studies, taking into account the quality of each study and how closely it applies to the case at hand.

Yang and her colleagues built a prototype of their clinical decision tool that mimics this process by presenting biomedical evidence alongside the AI’s recommendation. They used GPT-3 to find and summarize relevant research. (ChatGPT is the better-known offshoot of GPT-3, which is tailored for human dialogue.)

“We built a system that basically tries to recreate the interpersonal communication that we observed when the doctors give suggestions to each other, and fetches the same kind of evidence from clinical literature to support the AI’s suggestion,” Yang said.

The interface for the decision support tool lists patient information, medical history and lab test results on one side, with the AI’s personalized diagnosis or treatment suggestion on the other, followed by relevant biomedical studies. In response to doctor feedback, the researchers added a short summary for each study, highlighting details of the patient population, the medical intervention and the patient outcomes, so doctors can quickly absorb the most important information.
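
The workflow the article describes — retrieve studies relevant to the case, condense each into population, intervention and outcome, and display them beside the AI’s suggestion — can be sketched roughly as below. The `find_relevant_studies` and `gpt3_summarize` functions are placeholders for the GPT-3-based pipeline, not the researchers’ actual code.

```python
# Sketch of assembling an evidence panel next to an AI recommendation.
# Retrieval and summarization are placeholder stubs, not the study's pipeline.
from dataclasses import dataclass

@dataclass
class StudySummary:
    title: str
    population: str    # who was studied
    intervention: str  # what was done
    outcome: str       # what happened

def find_relevant_studies(case_description: str) -> list[str]:
    """Placeholder: return abstracts relevant to this patient's case."""
    return ["Abstract of a hypothetical trial of treatment X in older adults..."]

def gpt3_summarize(abstract: str) -> StudySummary:
    """Placeholder for a language-model call that extracts the key fields."""
    return StudySummary(
        title="Hypothetical trial of treatment X",
        population="Adults over 65 with condition Y",
        intervention="Treatment X for 12 weeks",
        outcome="Symptom scores improved versus placebo",
    )

def build_panel(case_description: str, ai_recommendation: str) -> dict:
    evidence = [gpt3_summarize(a) for a in find_relevant_studies(case_description)]
    return {"recommendation": ai_recommendation, "evidence": evidence}

panel = build_panel("72-year-old patient with condition Y", "Start treatment X")
print(panel["recommendation"])
for s in panel["evidence"]:
    print(f"- {s.title}: {s.population}; {s.intervention}; {s.outcome}")
```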

The research team developed prototype decision support tools for three specialties – neurology, psychiatry and palliative care – and asked three doctors from each specialty to test out the prototype by evaluating sample cases.

In interviews, doctors said they appreciated the clinical evidence, finding it intuitive and easy to understand, and preferred it to an explanation of the AI’s inner workings.

“It’s a highly generalizable method,” Yang said. This type of approach could work for all medical specialties and other applications where scientific evidence is needed, such as Q&A platforms to answer patient questions or even automated fact checking of health-related news stories. “I would hope to see it embedded in different kinds of AI systems that are being developed, so we can make them useful for clinical practice.”

Co-authors on the study include doctoral students Yiran Zhao and Stephen Yang in the field of information science, and Yuexing Hao in the field of human behavior design. Volodymyr Kuleshov, assistant professor at the Jacobs Technion-Cornell Institute at Cornell Tech and in computer science in Cornell Bowers CIS, Fei Wang, associate professor of population health sciences at Weill Cornell Medicine, and Kexin Quan of the University of California, San Diego also contributed to the study.

The researchers received support from the AI2050 Early Career Fellowship and the Cornell and Weill Cornell Medicine’s Multi-Investigator Seed Grants.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story originally appeared in the Cornell Chronicle.

Cornell Tech Faculty Win Test of Time Award at CCS 2022

Ari Juels and Thomas Ristenpart were recognized for their co-authorship of a 2012 research paper that had a long-lasting influence and significant impact on security systems and privacy

New York, NY (December 15) – Cornell Tech faculty members Weill Family Foundation and Joan and Sanford I. Weill Professor Ari Juels and Associate Professor of Computer Science Thomas Ristenpart were the recipients of the Test of Time Award at the ACM Conference on Computer and Communications Security (CCS) for their co-authored 2012 paper, “Cross-VM side channels and their use to extract private keys.”

The CCS Test of Time Award recognizes papers that report research with long-lasting influence and significant impact on one or multiple subareas of systems security and privacy, through opening new research directions, proposing new technologies, or making new discoveries to create a better understanding of security risks.

The paper, co-authored by Yinqian Zhang and Michael K. Reiter, successfully demonstrated a novel cybersecurity attack method against virtualized computing environments. To do so, the research team examined virtualization, the software-enabled process of dividing a single physical computer into multiple virtual computers – called virtual machines – to make fuller use of computing resources and maximize cost-effectiveness. This is a common practice in almost all computing environments, from laptops to cloud servers.

The attack method used by the researchers and detailed in their paper is known as a “side-channel attack,” a technique that exploits sensitive information that is mistakenly leaked by poorly configured systems. In a first-of-its-kind demonstration, the team was able to construct a sophisticated side channel attack to gather sensitive data leaked by one virtual machine and weaponize it against another. The successful attack yielded a software key that unlocked encrypted files stored in that virtual environment and showcased the dangers involved with this type of software.

Juels is a Professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion. He is also a member of the Computer Science field at Cornell University. His interests span a broad range of topics in computer security, cryptography, and privacy, including cloud security, financial cryptography, cybersecurity, user authentication, medical-device security, biometrics, and security and privacy for the Internet of Things.

Ristenpart serves as an Associate Professor at Cornell Tech and in the Department of Computer Science at Cornell University. His research is in computer security, with recent topics including cloud computing security, applied and theoretical cryptography, and privacy.

Machine Learning Gives Nuanced View of Alzheimer’s Stages

By David Nutt, Cornell Chronicle

A Cornell-led collaboration used machine learning to pinpoint the most accurate means, and timelines, for anticipating the advancement of Alzheimer’s disease in people who are either cognitively normal or experiencing mild cognitive impairment.

The modeling showed that predicting the future decline into dementia for individuals with mild cognitive impairment is easier and more accurate than it is for cognitively normal, or asymptomatic, individuals. At the same time, the researchers found that predictions for cognitively normal subjects are less accurate for longer time horizons, but for individuals with mild cognitive impairment, the opposite is true.

The modeling also demonstrated that magnetic resonance imaging (MRI) is a useful prognostic tool for people in both stages, whereas tools that track molecular biomarkers, such as positron emission tomography (PET) scans, are more useful for people experiencing mild cognitive impairment.

The team’s paper, “Machine Learning Based Multi-Modal Prediction of Future Decline Toward Alzheimer’s Disease: An Empirical Study,” published Nov. 16 in PLOS ONE. The lead author is Batuhan Karaman, a doctoral student in the field of electrical and computer engineering.

Alzheimer’s disease can take years, sometimes decades, to progress before a person exhibits symptoms. Once diagnosed, some individuals decline rapidly but others can live with mild symptoms for years, which makes forecasting the rate of the disease’s advancement a challenge.

“When we can confidently say someone has dementia, it is too late. A lot of damage has already happened to the brain, and it’s irreversible damage,” said senior author Mert Sabuncu, associate professor of electrical and computer engineering in the College of Engineering and Cornell Tech, and of electrical engineering in radiology at Weill Cornell Medicine.

“We really need to be able to catch Alzheimer’s disease early on,” Sabuncu said, “and be able to tell who’s going to progress fast and who’s going to progress slower, so that we can stratify the different risk groups and be able to deploy whatever treatment options we have.”

Clinicians often focus on a single “time horizon” – usually three or five years – to predict Alzheimer’s progression in a patient. The timeframe can seem arbitrary, according to Sabuncu, whose lab specializes in analysis of biomedical data – particularly imaging data, with an emphasis on neuroscience and neurology.

Sabuncu and Karaman partnered with longtime collaborator and co-author Elizabeth Mormino of Stanford University to use neural-network machine learning that could analyze five years’ worth of data about individuals who were either cognitively normal or had mild cognitive impairment. The data, captured in a study by the Alzheimer’s Disease Neuroimaging Initiative, encompassed everything from an individual’s genetic history to PET and MRI scans.

“What we were really interested in is, can we look at these data and tell whether a person will progress in upcoming years?” Sabuncu said. “And importantly, can we do a better job in forecasting when we combine all the follow-up datapoints we have on individual subjects?”

The researchers discovered several notable patterns. For example, predicting a person will move from being asymptomatic to exhibiting mild symptoms is much easier for a time horizon of one year, compared to five years. However, predicting if someone will decline from mild cognitive impairment into Alzheimer’s dementia is most accurate on a longer timeline, with the “sweet spot” being about four years.

“This could tell us something about the underlying disease mechanism, and how temporally it is evolving, but that’s something we haven’t probed yet,” Sabuncu said.

Regarding the effectiveness of different types of data, the modeling showed that MRI scans are most informative for asymptomatic cases and are particularly helpful for predicting if someone’s going to develop symptoms over the next three years, but less helpful for forecasting for people with mild cognitive impairment. Once a patient has developed mild cognitive impairment, PET scans, which measure certain molecular markers such as the proteins amyloid and tau, appear to be more effective.

One advantage of the machine learning approach is that neural networks are flexible enough that they can function despite missing data, such as patients who may have skipped an MRI or PET scan.
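
The article doesn’t detail how the network copes with missing scans, but one common pattern, sketched below as an assumption rather than the paper’s architecture, is to encode each modality separately and simply omit the ones that are absent, so the model still produces horizon-by-horizon predictions from whatever data a subject has.

```python
# Minimal sketch of a multi-modal predictor that tolerates missing inputs:
# each modality is encoded separately and omitted when absent.
# Illustrative only; not the architecture from the paper.
import torch
import torch.nn as nn

class MultiModalPredictor(nn.Module):
    def __init__(self, dims=None, horizons=5):
        super().__init__()
        dims = dims or {"mri": 32, "pet": 16, "genetic": 8}  # assumed feature sizes
        self.encoders = nn.ModuleDict({name: nn.Linear(d, 64) for name, d in dims.items()})
        self.head = nn.Linear(64, horizons)  # one progression risk per time horizon

    def forward(self, inputs):
        # Fuse only the modalities that are present for this subject.
        encoded = [torch.relu(enc(inputs[name]))
                   for name, enc in self.encoders.items() if name in inputs]
        fused = torch.stack(encoded).sum(dim=0)
        return torch.sigmoid(self.head(fused))

model = MultiModalPredictor()
# A subject with MRI features and genetics but no PET scan:
risk = model({"mri": torch.randn(1, 32), "genetic": torch.randn(1, 8)})
print(risk.shape)  # torch.Size([1, 5]) -- a risk estimate at each of five horizons
```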

In future work, Sabuncu plans to modify the modeling further so that it can process complete imaging or genomic data, rather than just summary measurements, to harvest more information that will boost predictive accuracy.

The research was supported by the National Institutes of Health National Library of Medicine and National Institute on Aging, and the National Science Foundation.

Many Weill Cornell Medicine physicians and scientists maintain relationships and collaborate with external organizations to foster scientific innovation and provide expert guidance. The institution makes these disclosures public to ensure transparency. For this information, see profile for Dr. Sabuncu.

This story originally appeared in the Cornell Chronicle.

Programming Tool Turns Handwriting Into Computer Code

By Louis DiPietro, Cornell Ann S. Bowers College of Computing and Information Science

A Cornell team has created an interface that allows users to handwrite and sketch within computer code – a challenge to conventional coding, which typically relies on typing.

The pen-based interface, called Notate, lets users of computational, digital notebooks – such as Jupyter notebooks, which are web-based and interactive – open drawing canvases and handwrite diagrams within lines of traditional, digitized computer code.

Powered by a deep learning model, the interface bridges handwritten and textual programming contexts: Notation in the handwritten diagram can reference textual code and vice versa. For instance, Notate recognizes handwritten programming symbols, like “n,” and then links them up to their typewritten equivalents. In a case study, users drew quantum circuit diagrams inside of Jupyter notebook code cells.
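
For a sense of what such a sketch stands in for, the handwritten diagram ultimately corresponds to ordinary circuit-construction code in the notebook, with a drawn symbol such as “n” linked to the typed variable of the same name. The snippet below uses Qiskit purely as an illustration; the paper’s case study is not tied to this particular library or circuit.

```python
# The kind of executable code a handwritten circuit sketch would map to.
# Qiskit is used here only as an illustration; the study's toolkit and
# circuits may differ.
from qiskit import QuantumCircuit

n = 2  # a symbol like "n" drawn in the sketch can link to this typed variable
qc = QuantumCircuit(n)
qc.h(0)      # Hadamard on qubit 0
qc.cx(0, 1)  # CNOT entangling qubits 0 and 1 (a Bell pair)
print(qc.draw(output="text"))
```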

The tool was described in “Notational Programming for Notebook Environments: A Case Study with Quantum Circuits,” presented at the ACM Symposium on User Interface Software and Technology, held Oct. 29 through Nov. 2 in Bend, Oregon. The paper, whose lead author is Ian Arawjo, doctoral student in the field of information science, won an honorable mention at the conference.

“A system like this would be great for data science, specifically with sketching plots and charts that then inter-operate with textual code,” Arawjo said. “Our work shows that the current infrastructure of programming is actually holding us back. People are ready for this type of feature, but developers of interfaces for typing code need to take note of this and support images and graphical interfaces inside code.”

Arawjo said the work demonstrates a new path forward by introducing artificial intelligence-powered, pen-based coding at a time when drawing tablets are becoming more widely used.

“Tools like Notate are important because they open us up to new ways to think about what programming is, and how different tools and representational practices can change that perspective,” said Tapan Parikh, associate professor of information science at Cornell Tech and a paper co-author.

Other co-authors are: Anthony DeArmas ’22; Michael Roberts, a doctoral student in the field of computer science; and Shrutarshi Basu, Ph.D. ’18, currently a visiting assistant professor of computer science at Middlebury College.

Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story originally appeared in the Cornell Chronicle.

Personal Sensing at Work: Tracking Burnout, Balancing Privacy

By Tom Fleischman, Cornell Chronicle

Personal sensing data could help monitor and alleviate stress among resident physicians, although privacy concerns over who sees the information and for what purposes must be addressed, according to collaborative research from Cornell Tech.

Burnout in all types of workplaces is on the rise in the U.S., where the “Great Resignation” and “quiet quitting” have entered the lexicon in recent years. This is especially true in the health care industry, which has been strained beyond measure due to the COVID-19 pandemic.

Stress is physical as well as mental, and evidence of stress can be measured through the use of smartphones, wearables and personal computers. But data collection and analysis – and the larger questions of who should have access to that information, and for what purpose – raise myriad sociotechnical questions.

“We’ve looked at whether we can measure stress in workplaces using these types of devices, but do these individuals actually want this kind of system? That was the motivation for us to talk to those actual workers,” said Daniel Adler, co-lead author with fellow doctoral student Emily Tseng of “Burnout and the Quantified Workplace: Tensions Around Personal Sensing Interventions for Stress in Resident Physicians,” published Nov. 11 in Proceedings of the ACM on Human-Computer Interaction.

The paper is being presented at the ACM Conference on Computer-Supported Cooperative Work (CSCW) and Social Computing, taking place virtually Nov. 8-22.

Adler and Tseng worked with senior author Tanzeem Choudhury, the Roger and Joelle Burnell Professor in Integrated Health and Technology at the Jacobs Technion-Cornell Institute at Cornell Tech. Contributors came from Zucker School of Medicine at Hofstra/Northwell Health and Zucker Hillside Hospital.

The resident physician’s work environment is a bit different from the traditional apprenticeship situation in that their supervisor, the attending physician, is also their mentor. That can blur the lines between the two.

“That’s a new context,” Tseng said. “We don’t really know what the actual boundaries are there, or what it looks like when you introduce these new technologies, either. So you need to try and decide what those norms might be to determine whether this information flow is appropriate in the first place.”

Choudhury and her group addressed these issues through a study involving resident physicians at an urban hospital in New York City. After hourlong interviews with residents on Zoom, the residents and their attendings were given mockups of a Resident Wellbeing Tracker, a dashboard with behavioral data on residents’ sleep, activity and time working; self-reported data on residents’ levels of burnout; and a text box where residents could characterize their well-being.
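
As a rough sketch of the information the mockup brings together, one dashboard entry might combine passively sensed behavior, a self-reported burnout score and free text. The field names below are illustrative assumptions, not the study’s schema.

```python
# Illustrative data model for one wellbeing-tracker dashboard entry.
# Field names are assumptions for illustration, not the study's schema.
from dataclasses import dataclass

@dataclass
class ResidentWeek:
    resident_id: str          # or an anonymized identifier, per the privacy findings
    hours_slept_avg: float    # passively sensed behavioral data
    active_minutes_avg: float
    hours_worked: float
    burnout_score: int        # self-reported, e.g., on a standard survey scale
    self_description: str     # free-text box in the resident's own words

example = ResidentWeek("resident-07", 6.2, 41.0, 71.5, 3, "Tough ICU rotation this week.")
print(example)
```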

Tseng said the residents were open to the idea of using technology to enhance well-being. “They were also very interested in the privacy question,” she said, “and how we could use technologies like this to achieve those positive ends while still balancing privacy concerns.”

The study featured two intersecting use cases: self-reflection, in which the residents view their behavioral data, and data sharing, in which the same information is shared with their attendings and program directors for purposes of intervention.

Among the key findings: Residents were hesitant to share their data without the assurance that supervisors would use it to enhance their well-being. There is also a question of anonymity, which was more achievable with greater participation. But the anonymity that comes with greater participation would hurt the potential usefulness of the program, since supervisors would not be able to identify which residents were struggling.

“This process of sharing personal data is somewhat complicated,” Adler said. “There is a lot of interesting continuing work that we’re involved in that looks at this question of privacy, and how you present yourself through your data in more-traditional mental health care settings. It’s not as simple as, ‘They’re my doctor, therefore I’m comfortable sharing this data.’”

The authors conclude by referring to the “urgent need for further work establishing new norms around data-driven workplace well-being management solutions that better center workers’ needs, and provide protections for the workers they intend to support.”

Other contributors included Emanuel Moss, a postdoctoral researcher at Cornell Tech; David Mohr, a professor in the Feinberg School of Medicine at Northwestern University; as well as Dr. John Kane, Dr. John Young and Dr. Khatiya Moon from Zucker Hillside Hospital.

The research was supported by grants from the National Institute of Mental Health, the National Science Foundation and the Digital Life Initiative at Cornell Tech.

This story originally appeared in the Cornell Chronicle.

Online Microaggressions Strongly Impact Disabled Users

By Patricia Waldron, Cornell Ann S. Bowers College of Computing and Information Science

In person, people with disabilities often experience microaggressions – comments or subtle insults based on stereotypes.

New types of microaggressions play out online as well, according to new Cornell-led research.

The study finds those constant online slights add up. Microaggressions affect self-esteem and change how people with disabilities use social media. And due to their subtlety, microaggressions can be hard for algorithms to detect, the authors warn.

“This paper brings a new perspective on how social interactions shape what equitable access means online and in the digital world,” said Sharon Heung, a doctoral student in the field of information science. Heung presented the study, “Nothing Micro about It: Examining Ableist Microaggressions on Social Media,” Oct. 26 at ASSETS 2022, the Association for Computing Machinery SIGACCESS Conference on Computers and Accessibility.

When microaggressions occur in live settings, they are often ephemeral, with few bystanders. “When they happen on social media platforms, it’s happening in front of a large audience – the scale is completely different and then they live on, for people to see forever,” said co-author Aditya Vashistha, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science.

Additionally, social media platforms can amplify microaggressions, potentially spreading misinformation. “We’re very concerned about how it’s shaping the way the broader audience thinks about disability and disabled people,” said co-author Megh Marathe, assistant professor of media, information, bioethics, and social justice at Michigan State University.

Heung and co-author Mahika Phutane, a doctoral student in the field of computer science, interviewed 20 volunteers who self-identified as having various disabilities and who were active on social media platforms. The participants were asked to describe subtle discrimination and microaggressions they had experienced and the impact they had on their lives.

Patronizing comments like, “You’re so inspiring,” were the most common, along with infantilizing posts, like “Oh, you live by yourself?” People also asked inappropriate questions about users’ personal lives and made assumptions about what the person could do or wear based on their disability. Some users were told they were lying about their disability, or that they didn’t have one, especially if the disability was invisible, such as a mental health condition.

The researchers categorized the responses into 12 types of microaggressions. Most fit in categories previously recognized in offline interactions, but two were unique to social media. The first was “ghosting” or ignored posts. The second involved platforms that were inaccessible for people with disabilities. For example, some users said they felt unwelcome when people did not add alt text to photos or used text colors they couldn’t discern. One person with dwarfism said her posts were continually removed because she kept getting flagged as a minor.

After experiencing a microaggression, users had to decide how to respond. Regardless of whether they ignored the comment, reported it or tried to educate the other person, participants said it took an emotional toll. Many took breaks from social media or limited the information they shared online.

“Addressing this problem is really hard,” said Phutane. “Social media is driven to promote engagement. If they educate the perpetrator, then that original post will just get more and more promoted.”

The participants proposed that platforms should automatically detect and delete microaggressions, or a bot could pop up with information about disabilities.

Most social media platforms already have moderation tools – but reporting systems are sometimes flawed, lack transparency and can misidentify harassment. And microaggressions can be hard for automated systems to detect. Unlike hate speech, where algorithms can search for specific words, microaggressions are more nuanced and context-dependent.
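
A toy example makes the detection gap concrete: the comments participants flagged contain no slurs at all, so a word-list filter of the kind used for overt hate speech passes them untouched. The word list and posts below are invented for illustration.

```python
# Toy illustration: keyword filters catch overt slurs but pass
# microaggressions, whose harm comes from context and framing.
# The blocklist tokens and example posts are invented for illustration.
BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens for an overt-hate lexicon

def keyword_flag(post: str) -> bool:
    return any(word in post.lower().split() for word in BLOCKLIST)

posts = [
    "You're so inspiring for living on your own!",  # patronizing microaggression
    "You don't look disabled to me.",               # denial of disability
]

for post in posts:
    print(keyword_flag(post), "-", post)
# Both print False: nothing on the blocklist appears, yet participants
# reported exactly these kinds of comments as harmful.
```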

Once the scope and types of microaggressions experienced by people from marginalized groups are better understood, the researchers say tools can be developed to limit the burden of dealing with them. These issues are important to address, especially with the potential expansion of virtual reality and the metaverse.

“We need to be especially vigilant and conscious of how these real-world interactions get transferred over to online settings,” said co-author Shiri Azenkot, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech and Cornell Bowers CIS. “It’s not just social media interactions – we’re also going to see more interactions in virtual spaces.”

This work was partially supported by the National Science Foundation Graduate Research Fellowship and the University of California President’s Postdoctoral Fellowship.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story originally appeared in the Cornell Chronicle.

Do Trucks Mean Trump? AI Shows How Humans Misjudge Images

By Patricia Waldron

A study on the types of mistakes that humans make when evaluating images may enable computer algorithms that help us make better decisions about visual information, such as while reading an X-ray or moderating online content.

Researchers from Cornell and partner institutions analyzed more than 16 million human predictions of whether a neighborhood voted for Joe Biden or Donald Trump in the 2020 presidential election based on a single Google Street View image. They found that humans as a group performed well at the task, but a computer algorithm was better at distinguishing between Trump and Biden country.

The study also classified common ways that people mess up, and identified objects — such as pickup trucks and American flags — that led people astray.

“We’re trying to understand, where an algorithm has a more effective prediction than a human, can we use that to help the human, or make a better hybrid human-machine system that gives you the best of both worlds?” said first author J.D. Zamfirescu-Pereira, a graduate student at the University of California at Berkeley.

He presented the work, entitled “Trucks Don’t Mean Trump: Diagnosing Human Error in Image Analysis,” at the 2022 Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency (FAccT).

Recently, researchers have given a lot of attention to the issue of algorithmic bias, which is when algorithms make errors that systematically disadvantage women, racial minorities, and other historically marginalized populations.

“Algorithms can screw up in any one of a myriad of ways and that’s very important,” said senior author Emma Pierson, assistant professor of computer science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion with the Cornell Ann S. Bowers College of Computing and Information Science. “But humans are themselves biased and error-prone, and algorithms can provide very useful diagnostics for how people screw up.”

The researchers used anonymized data from a New York Times interactive quiz that showed readers snapshots from 10,000 locations across the country and asked them to guess how the neighborhood voted. They trained a machine learning algorithm to make the same prediction by giving it a subset of Google Street View images and supplying it with real-world voting results. Then they compared the performance of the algorithm on the remaining images with that of the readers.

Overall, the machine learning algorithm predicted the correct answer about 74% of the time. When averaged together to reveal “the wisdom of the crowd,” humans were right 71% of the time, but individual humans scored only about 63%.

People often incorrectly chose Trump when the street view showed pickup trucks or wide-open skies. In a New York Times article, participants noted that American flags also made them more likely to predict Trump, even though neighborhoods with flags were evenly split between the candidates.

The researchers classified the human mistakes as the result of bias, variance, or noise — three categories commonly used to evaluate errors from machine learning algorithms. Bias represents errors in the wisdom of the crowd — for example, always associating pickup trucks with Trump. Variance encompasses individual wrong judgments — when one person makes a bad call, even though the crowd was right, on average. Noise is when the image doesn’t provide useful information, such as a house with a Trump sign in a primarily Biden-voting neighborhood.
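
A small worked example, with made-up numbers rather than the study’s data, shows how guesses can be sorted into these buckets: when the crowd’s majority vote is itself wrong, the error looks like bias; when an individual misses even though the majority is right, it looks like variance; noise — images that simply carry no usable signal — is harder to isolate and is left out of this sketch.

```python
# Toy decomposition of human image-judgment errors (illustrative numbers,
# not the study's data): compare individual guesses, the crowd's majority
# vote, and the true label for each image.
from collections import Counter

# guesses[image] = individual human predictions for that image
guesses = {
    "img1": ["Trump", "Trump", "Biden", "Trump"],
    "img2": ["Biden", "Biden", "Biden", "Trump"],
    "img3": ["Trump", "Trump", "Trump", "Trump"],  # e.g., pickup truck in view
}
truth = {"img1": "Trump", "img2": "Biden", "img3": "Biden"}

bias_errors = 0      # the crowd's majority vote itself is wrong
variance_errors = 0  # an individual is wrong even though the crowd is right
individual_correct = 0
crowd_correct = 0
total_guesses = 0

for img, votes in guesses.items():
    majority, _ = Counter(votes).most_common(1)[0]
    if majority == truth[img]:
        crowd_correct += 1
    else:
        bias_errors += 1
    for vote in votes:
        total_guesses += 1
        if vote == truth[img]:
            individual_correct += 1
        elif majority == truth[img]:
            variance_errors += 1

print(f"crowd accuracy:      {crowd_correct / len(guesses):.0%}")
print(f"individual accuracy: {individual_correct / total_guesses:.0%}")
print(f"bias errors (images): {bias_errors}, variance errors (guesses): {variance_errors}")
```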

Being able to break down human errors into categories may help improve human decision-making. Take radiologists reading X-rays to diagnose a disease, for example. If there are many errors due to bias, then doctors may need retraining. If, on average, diagnosis is successful but there is variance between radiologists, then a second opinion might be warranted. And if there is a lot of misleading noise in the X-rays, then a different diagnostic test may be necessary.

Ultimately, this work can lead to a better understanding of how to combine human and machine decision-making for human-in-the-loop systems, where humans give input into otherwise automated processes.

“You want to study the performance of the whole system together — humans plus the algorithm, because they can interact in unexpected ways,” Pierson said.

Allison Koenecke, assistant professor of information science, Nikhil Garg, assistant professor of operations research and information engineering within the College of Engineering at Cornell Tech and the Jacobs Institute, and colleagues from Stanford University also contributed to the study.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story originally appeared on the Cornell Bowers CIS news site.
