The deepfake threat just got a little more personal


When answering personality questionnaires, the AI clones' responses differed little from those of their human counterparts. The agents were particularly accurate at reproducing answers to personality questionnaires and predicting social attitudes, but less accurate at predicting behavior in interactive games involving economic decisions.

A question of purpose

The impetus for developing the simulation agents, the scientists explain, was the possibility of using them to conduct studies that would be expensive, impractical, or unethical with real human subjects. For example, the AI models could help evaluate the effectiveness of public health measures or improve understanding of reactions to product launches. Even modeling reactions to major social events would be conceivable, according to the researchers.

“General-purpose simulation of human attitudes and behavior—where each simulated person can engage across a range of social, political, or informational contexts—could enable a laboratory for researchers to test a broad set of interventions and theories,” the researchers write.
