<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: MGeraldo</title>
    <description>The latest articles on Forem by MGeraldo (@mggdev).</description>
    <link>https://forem.com/mggdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1982412%2Fbe9e6e12-a83f-45d8-9d36-37825a47491a.png</url>
      <title>Forem: MGeraldo</title>
      <link>https://forem.com/mggdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mggdev"/>
    <language>en</language>
    <item>
      <title>Deepfake and Elections: Challenges and Implications for Democracy</title>
      <dc:creator>MGeraldo</dc:creator>
      <pubDate>Mon, 26 Aug 2024 18:39:13 +0000</pubDate>
      <link>https://forem.com/mggdev/deepfake-and-elections-challenges-and-implications-for-democracy-2e24</link>
      <guid>https://forem.com/mggdev/deepfake-and-elections-challenges-and-implications-for-democracy-2e24</guid>
      <description>&lt;p&gt;The discourse surrounding election integrity and information transmission has taken on a new angle with the emergence of deepfake technologies. Deepfakes, which fabricate films and audios using cutting-edge AI algorithms, have the ability to erode public confidence in democratic institutions and sway public opinion in extremely unsettling ways. This article explores the consequences and challenges for democracy of using deepfakes in electoral scenarios.&lt;/p&gt;

&lt;h2&gt;What are Deepfakes?&lt;/h2&gt;



&lt;p&gt;"Deep learning" and "fake" are combined to get "deepfake". It alludes to a method of combining sounds and visuals via artificial intelligence. When misused, this technology can be used to make people feel uncomfortable or fabricate stories. Put differently, deepfakes are audiovisual materials produced by artificial intelligence systems that modify or create photos and movies using deep learning methods. These systems use generative neural networks to make people say or do things they never said or did, which makes it difficult to identify fake information. Convolutional neural networks that have been trained on massive picture datasets to duplicate facial expressions and movements convincingly are the source of deepfakes, according to the MIT Technology Review (Vincent, 2018). An example is the Buzzfeed video of former US President Barack Obama, which can be viewed at this link: &lt;a href="https://www.youtube.com/watch?v=cQ54GDm1eL0&amp;amp;t=20s" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=cQ54GDm1eL0&amp;amp;t=20s&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Deepfakes in Elections: Cases and Impacts&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg6t5opj9vp5g5oqb3vb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg6t5opj9vp5g5oqb3vb.png" alt="Deepfake Observer image" width="736" height="387"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h4&gt;1. 2016 US Elections&lt;/h4&gt;



&lt;p&gt;Deepfakes and other digital manipulation techniques raised questions during the 2016 US presidential election. While deepfakes were not widely reported during that election, other forms of manipulated media, such as fake news and social media manipulation, contributed to a climate of misinformation. In his book "Liars and Outliers", Bruce Schneier examines how disinformation and digital manipulation affect electoral integrity and public trust (Schneier, 2015).&lt;/p&gt;

&lt;h4&gt;2. 2019 Indian Elections&lt;/h4&gt;

&lt;p&gt;Misinformation was disseminated during the 2019 Indian general elections through deepfakes and other manipulated content. According to an Observer Research Foundation study, voters were swayed by fabricated recordings intended to discredit politicians. These videos frequently portrayed politicians in misleading situations or attributed to them exaggerated claims about their goals and deeds (Kumar et al., 2019).&lt;/p&gt;

&lt;h4&gt;3. 2017 French Elections&lt;/h4&gt;

&lt;p&gt;During the 2017 French presidential election, there were concerns that deepfakes could influence public opinion. Although there were no widely documented cases of deepfakes, Emmanuel Macron’s campaign was the target of numerous disinformation attacks. The National Commission for Informatics and Freedoms (CNIL) warned of the need for regulations and safeguards to prevent the spread of manipulated content, including deepfakes (CNIL, 2017).&lt;/p&gt;

&lt;h2&gt;Regulation and Public Policies in Brazil&lt;/h2&gt;

&lt;p&gt;Brazil is currently developing regulations and public policies related to deepfakes and disinformation, reflecting growing concerns about digital manipulation in elections and public communications.&lt;/p&gt;

&lt;h4&gt;1. Civil Rights Framework for the Internet&lt;/h4&gt;

&lt;p&gt;Although the Civil Rights Framework for the Internet (Marco Civil da Internet) does not specifically address deepfakes, it sets guidelines on the liability of Internet service providers and the protection of personal data, which may have implications for the regulation of false content.&lt;/p&gt;

&lt;h4&gt;2. Brazilian Data Protection Law (LGPD)&lt;/h4&gt;

&lt;p&gt;The General Data Protection Law (LGPD), Law No. 13.709/2018, regulates the processing of personal data and could affect the creation and distribution of deepfakes, particularly when such content uses personal data to create or manipulate fabricated video and audio.&lt;/p&gt;

&lt;h4&gt;3. Bill No. 2630/2020&lt;/h4&gt;

&lt;p&gt;Bill No. 2630/2020, also known as the Fake News Bill, aims to combat the spread of false information and increase transparency on social networks and messaging services. It seeks to establish rules for content identification and tracking, including user identification requirements and the liability of platforms for content distributed through their services. The proposed rules could provide a basis for combating deepfakes and other forms of digital manipulation (Brasil, 2020).&lt;/p&gt;

&lt;h4&gt;4. Proactive Measures&lt;/h4&gt;

&lt;p&gt;Beyond legislation, proactive measures are essential to educate the public about the risks of deepfakes and to promote digital literacy. Awareness campaigns and the promotion of safe information consumption practices help mitigate the negative impact of misinformation.&lt;/p&gt;

&lt;h2&gt;Implications for Democracy&lt;/h2&gt;

&lt;h4&gt;1. Erosion of Public Trust&lt;/h4&gt;

&lt;p&gt;The ability of deepfakes to create extremely realistic visual representations could significantly erode public trust in information and institutions. According to a study published in Nature Communications, exposure to deepfakes reduced citizens’ trust in the accuracy of information, even after such content had been debunked (Chesney &amp;amp; Citron, 2019). The spread of deepfakes could lead voters to question the authenticity of campaign and candidate statements, undermining the integrity of the electoral process.&lt;/p&gt;

&lt;h4&gt;2. Manipulating Public Opinion&lt;/h4&gt;

&lt;p&gt;Deepfakes have the potential to effectively manipulate public opinion. Research shows that fake visual and audio content can be used to reinforce existing biases or create new rumors (Marwick &amp;amp; Lewis, 2017). This can be used to support certain candidates or parties and influence voting behavior in ways that are not easily detected.&lt;/p&gt;

&lt;h4&gt;3. Challenges for Regulation and Verification&lt;/h4&gt;

&lt;p&gt;Detecting deepfakes represents a significant challenge for regulators and social media platforms. Detection technologies are under development but still have limitations regarding effectiveness and speed in identifying manipulated content. The Journal of Cyber Policy highlights the urgent need for technical and policy solutions to address the challenges posed by deepfakes, including the creation of verification systems and the promotion of media literacy (Tucker et al., 2018).&lt;/p&gt;
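As a rough illustration of the aggregation step many detection pipelines share, the sketch below assumes a hypothetical per-frame classifier that assigns each sampled frame a "fakeness" score in the range 0 to 1; the function name, scores, and thresholds are illustrative assumptions, not any real tool's API.

```python
# Minimal sketch (not a real detector): combine hypothetical per-frame
# "fakeness" scores into a single video-level flag by majority vote.

def flag_video(frame_scores, frame_threshold=0.5, vote_ratio=0.5):
    """Return True when the share of suspicious frames reaches vote_ratio."""
    # A frame counts as suspicious when its score reaches frame_threshold.
    flagged = [s for s in frame_scores if s >= frame_threshold]
    return len(flagged) >= vote_ratio * len(frame_scores)

# Scores a frame classifier might emit for five sampled frames:
scores = [0.2, 0.8, 0.9, 0.7, 0.1]
print(flag_video(scores))  # True: 3 of 5 frames look manipulated
```

Real systems are far more involved (temporal consistency checks, artifact analysis, provenance metadata), but even this toy aggregation shows why speed matters: every sampled frame must be scored before a verdict exists.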

&lt;h2&gt;Measures and Recommendations&lt;/h2&gt;

&lt;p&gt;To mitigate the risks associated with deepfakes, a multifaceted approach is necessary:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Development of Detection Technologies:&lt;/strong&gt; Funding research to improve deepfake detection technologies can help identify and remove manipulated content more effectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regulation and Public Policies:&lt;/strong&gt; Establishing specific rules for the use and dissemination of deepfakes can help create a safer information and communication ecosystem. The European Union Agency for Cybersecurity indicates that regulation should include measures against the production and sharing of harmful deepfakes (ENISA, 2020).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Education and Awareness:&lt;/strong&gt; Promoting media literacy among citizens can help increase awareness of the risks of deepfakes and improve the ability to identify false information.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Deepfakes pose a significant threat to the integrity of elections and public trust in democratic institutions. The ability to create convincing false audiovisual content can be used to manipulate public opinion and mislead voters. To address these challenges, it is important to develop detection technologies, implement effective regulations, and improve media literacy. Collaboration between governments, technology companies, and civil society is essential to protect the integrity of the electoral process and ensure a healthy and informed democracy.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;Brasil. (2020). Projeto de Lei nº 2630/2020. Câmara dos Deputados. &lt;a href="https://www.camara.leg.br/proposicoesWeb/fichadetramitacao?idProposicao=2207176" rel="noopener noreferrer"&gt;https://www.camara.leg.br/proposicoesWeb/fichadetramitacao?idProposicao=2207176&lt;/a&gt;&lt;br&gt;
Chesney, B., &amp;amp; Citron, D. K. (2019). Deepfakes and the threat to democracy. Nature Communications, 11(1), 1-9. &lt;a href="https://doi.org/10.1038/s41467-019-13750-x" rel="noopener noreferrer"&gt;https://doi.org/10.1038/s41467-019-13750-x&lt;/a&gt;&lt;br&gt;
CNIL. (2017). Rapport sur les fake news. Commission Nationale de l'Informatique et des Libertés. &lt;a href="https://www.cnil.fr/en/rapport-sur-les-fake-news" rel="noopener noreferrer"&gt;https://www.cnil.fr/en/rapport-sur-les-fake-news&lt;/a&gt;&lt;br&gt;
ENISA. (2020). Threat landscape for deepfake technologies. European Union Agency for Cybersecurity. &lt;a href="https://www.enisa.europa.eu/publications/threat-landscape-for-deepfake-technologies" rel="noopener noreferrer"&gt;https://www.enisa.europa.eu/publications/threat-landscape-for-deepfake-technologies&lt;/a&gt;&lt;br&gt;
Kumar, P., Sharma, M., &amp;amp; Gupta, A. (2019). Disinformation and deepfakes in Indian elections. Observer Research Foundation. &lt;a href="https://www.orfonline.org/research/disinformation-and-deepfakes-in-indian-elections" rel="noopener noreferrer"&gt;https://www.orfonline.org/research/disinformation-and-deepfakes-in-indian-elections&lt;/a&gt;&lt;br&gt;
BuzzFeedVideo. (2018). You Won’t Believe What Obama Says In This Video! [Video]. YouTube. Retrieved August 26, 2024, from &lt;a href="https://www.youtube.com/watch?v=xJaCHrMX9ys" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=xJaCHrMX9ys&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
      <category>chatgpt</category>
      <category>community</category>
    </item>
  </channel>
</rss>
