1 Introduction
This essay presents a novel way to explore generative artificial intelligence (GenAI) based on the AI Methodology Map1. The map is a pedagogical2 resource (interactive toolkit and teaching material) and theoretical framework designed to structure, visually represent, and explore GenAI web-based applications (apps) for digital methods-led research, in particular apps and code-based platforms mediating access to GenAI foundation models (Burkhardt & Rieder, 2024). As an interactive toolkit and teaching material, the map supports workshops and AI sprints; it is also embodied in a static representation that covers theoretical orientation principles for engaging with GenAI. While we expect the reader to take these perspectives together, they can also serve separate purposes if desired.
The AI Methodology Map is based on three core principles: the theoretical and practical foundations of digital methods (Marres, 2017; Omena, 2021a), visual thinking and the documentation of data practices (Arnheim, 1980; 2001; Mauri et al., 2020), and interdisciplinary research efforts (Gray et al., 2022). Unlike method protocols and recipes that present “how to” steps to achieve a specific research outcome while ensuring reliable results (see Bounegru et al., 2017), the map prioritizes ways of knowing GenAI, that is, understanding what to look at when leveraging GenAI to advance digital methods. The map therefore expands established digital methods practices, i.e., those enacted by the repurposing of crawling, scraping, and API calling for social and cultural research, by enquiring and experimenting with what counts in practice when repurposing GenAI.
The AI Methodology Map differs from quick responses to AI’s impact and (mis)uses through precautionary measures: it is not focused on mandating transparent disclosure of the use and performance of large language models (LLMs) (see Stokel-Walker & Van Noorden, 2023; Dwivedi et al., 2023) or on promoting a framework that primarily centres on the ethical issues and misuses of GenAI in educational settings (see Russell Group, 2023; Popescu & Schut, 2023; Baidoo-Anu & Ansah, 2023). While acknowledging these as critical factors, we argue that the effort to understand GenAI from uncomplicated and technical perspectives, as the map proposes, is equally relevant. The map thus addresses other challenges of “repurposing” GenAI (technology) for social research, which involves, above all, a mindset (see Franklin, 1990; Marres, 2017) encompassing conceptual, technical, and empirical dimensions (see Hoel, 2012; Omena, 2022; Rieder, 2020). By creating space to sit with GenAI through hands-on practice, the map aims to surface foundational layers in discussions of social research and contributes to the epistemology of digital methods.
This essay outlines the AI Methodology Map’s principles, its system of methods, educational entry points, and applications. The organization is as follows. First, we review GenAI methods, discussing how to access them and their current uses in social research and the classroom context. Second, we define the map and unpack the theory it embodies, navigating through the three interconnected methods constituting it: making room for GenAI (method 1); repurposing GenAI apps and outputs (method 2); and designing digital methods-oriented projects with GenAI outputs (method 3). Method 1 focuses on ways to become familiar with GenAI conceptually, technically, and empirically. Method 2 introduces new ways to use GenAI and to repurpose prompting techniques as research methods. Method 3 elicits the exploration of designing digital methods projects for analyzing GenAI models, outputs, or interfaces. Third, we discuss how the map bridges GenAI, technicity, applications, and the practice of digital methods, demonstrating its potential and reproducibility in three educational settings. Finally, a case study demonstrates the AI Methodology Map’s application, employing a network vision methodology (Omena, 2021b) to analyze image collections generated by nine prominent GenAI apps. This study investigates algorithmic race stereotypes and compares visual models’ responses to the same prompt. We conclude by discussing methodological challenges and addressing three provocations.
This essay’s main contribution is the introduction and development of the “AI Methodology Map”, a dual-purpose interactive toolkit and theoretical framework designed for exploring GenAI applications in digital methods-led research within the Social Sciences and Humanities. By functioning as both a theoretical framework and a practical tool, the map bridges a gap between theoretical perspectives and empirical engagement with GenAI, and facilitates its integration into educational and research contexts.
3 The Map: An Introduction
This section introduces the AI Methodology Map, represented in Figure 2, as a theoretical framework that outlines principles of orientation for engaging with GenAI. It offers the map’s definition, unpacks the theory it bears, and navigates three interconnected methods to understand, explore, and develop digital methods projects with GenAI.
3.1 AI Methodology Map: What Is It and What Is It For?
The AI Methodology Map (Figure 2) is a pedagogical resource (interactive toolkit and teaching material) and theoretical framework designed to structure, visually represent, and explore GenAI web-based applications for digital methods-led research. The map is a conceptual, empirical, and interactive structure that organizes knowledge and methodological frameworks for engaging with GenAI. It combines methods crafted to enhance comprehension of GenAI through practical applications, helping researchers and students develop ways of understanding, thinking about, and creating knowledge with GenAI. Theoretically, it covers perspectives and discussions on empirical engagement with GenAI in the Social Sciences and Humanities. As an external object (digital version), the map is materialized as teaching material and an interactive toolkit for exploring GenAI apps in the context of digital methods research. While we expect the reader to take these perspectives together, they can also serve separate purposes if desired.
The methodology map presumes that users possess a basic understanding of generative methods. Its representation serves not only as visual guidance for practical activities but also aims to make the “invisible” (see Mauri & Ciuccarelli, 2016) aspects of GenAI methods more visible and understandable. Thus, this map has a different purpose from method protocols and recipes, which record and present predefined, structured methods, techniques, or procedures designed to achieve a specific research outcome (Bounegru et al., 2017; Mauri et al., 2020). While method protocols ensure the reliability and validity of empirical findings (Cross, 2001) by explaining method design and implementation (what was done and how), the map we introduce prioritizes processes of acquiring technical knowledge for method reasoning and practice. By focusing on “what to look at”, the map elicits ways of knowing GenAI while understanding when and why to value them in a methodological ensemble (see Omena, 2021a). In this sense, the map’s purpose and outputs move towards the epistemology of digital methods and its critical reflections rather than final research products.
Regarding reproducibility, although the map is developed for implementation in workshops or AI sprints, it allows anyone to independently repeat the procedures without needing mediators. Individuals can use the map’s essential theoretical points as a guide to engage with GenAI and take advantage of the external teaching resources. In the following sections, we will introduce the theoretical framework that underpins the map and its system of methods, which elicit attitudes of making room for, repurposing, and designing projects with GenAI.
3.2 Three Principles: Theoretical Framework
Three principles underlie the AI Methodology Map: (i) the practical and theoretical foundations of digital methods, (ii) visual thinking and data practice documentation, and (iii) interdisciplinary research endeavours. We will discuss each individually and then illustrate how they intersect within the interconnected methods depicted on the map.
The map embodies a technicity perspective on the practice of digital methods, one that considers medium-technicity to (re)think the design and implementation of these methods (Omena, 2021a; 2022). This perspective attends to the specific mindset, modes of thinking, and technological awareness required by the methods (Marres, 2017; Rogers & Lewthwaite, 2019; Rogers, 2013) or by technology itself (see Franklin, 1990), while embodying a domain of knowledge encompassing conceptual, technical, and empirical dimensions (see Hoel, 2012; Rieder, 2020) about GenAI and the computational media needed to work with the methods. On the one hand, a technicity perspective is closely related to the relational processes between the researcher and the computational media required to advance the methods, i.e., the iterative and navigational research practices that constitute a methodological ensemble, and to technical and practical knowledge. On the other, it refers to the researcher’s attitude of understanding GenAI and computational media conceptually, technically, and empirically, in isolation, in comparison, and on their own terms, while knowing how and when to appreciate their substance, value, and agency (Omena, 2021a; 2022). This framework, as elucidated in the AI Methodology Map, encourages a grasp of GenAI methods and applications on their own terms and in their relational contexts within a methodological ensemble, as demonstrated in section 5.
The second principle underlying the map incorporates visual thinking and data practice documentation to acquire and produce knowledge about GenAI. Visual thinking on the map guides intuitive and intellectual modes of thinking that closely interact, making it difficult to separate them (see Arnheim, 1980; 2001). The map’s visual representations support processes of visually acquiring knowledge through thought and experience. They are designed to connect the map user with core aspects of GenAI and its exploratory applications.
Visual thinking in the map is not just a feature but a comprehensive approach to introducing and revealing GenAI through the context of digital methods practices. For example, it allows users to easily navigate three interlinked methods that offer detailed procedures and a clear set of instructions for overcoming the challenges of repurposing GenAI for social research. This approach, where the process of acquiring and producing knowledge involves the interpenetration of theoretical, practical, and technical modes of thinking, is further enhanced by integrating visual data practice documentation. The latter aids in recognizing the “non-objective, situated, and interpretative nature” of data practices (Mauri & Ciuccarelli, 2016; Mauri et al., 2020). For example, the map guides users in structuring and recording each step and decision via methodological workflows. The visual aids are particularly relevant as they facilitate a practical and technical understanding of GenAI apps, encouraging critical, reflective, and relational thinking. Visual thinking is also applied through interactive visualization, which provides initial technical and practical knowledge of GenAI apps, models, or code.
The third principle explains how the map fosters interdisciplinary research efforts that combine digital methods, information design, and media studies. The AI Methodology Map is based on research-led teaching (see Gray et al., 2022; Rogers & Lewthwaite, 2019) and collaborative approaches among SUPSI, NOVA, and EPFL that involve MA courses such as Interaction Design, New Media and Web Practices, Space and Communication, and an MSc in Transition, Innovation, and Sustainability Environments. The proposed methodology combines the authors’ research background and classroom context to advance research while teaching about GenAI, its potential for design and media studies, and its use as a research method.
Together, these principles correspond to and inform five practical entry points for leading the map’s application (see Figure 2), which promote understanding and engagement with GenAI apps. We will discuss and illustrate them practically in Section 4.
3.3 Three Methods: Make Room, Repurpose, and Design Projects with GenAI
The AI Methodology Map combines three interconnected methods designed to understand, explore, and develop projects with GenAI (Figure 2). These methods follow a technicity perspective to engage with GenAI (Omena, 2021a) and are well suited to individual and group activities in AI sprints or workshops.
Making room for GenAI (Figure 2, Method 1) explores generative methods by navigating an interactive visualization6 while responding to crucial questions about GenAI apps, supporting the map user’s conceptual, technical, and empirical familiarization with them. The interactive visualization contains structured information about various generative methods mediated by GenAI proprietary applications and open-source models, while five key questions, with their follow-ups, guide the exploration: What generative method and which LLM is operating? Is API documentation available, and can we identify the dataset used to train the model?7 What are the limitations or potential biases one might encounter in the LLM currently in use? Is it an open-source or proprietary model, and who developed it? What type of input is required, what kind of output does one get, and what is required to use this app or open-source code? If possible, adjust the model temperature: what does it mean? The explorations and findings should be documented in a shared file8 (e.g. using Figma), which allows for and empowers collective discussions among all involved. The proposed activities encourage efforts to become acquainted with GenAI methods as carriers of meaning — here, employing GenAI for social research. Method 1 showcases that to know GenAI apps or open-source models, one must do more than interact with them. So, when making room for GenAI, the initial fascination with its methods is immediately balanced with a critical and technical awareness of what they are and the key elements making them operate.
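To make the temperature question concrete, consider a minimal sketch of this exercise in code, assuming access to OpenAI’s Python client; the model name and prompt are illustrative, and any app or model exposing a temperature setting would serve the same purpose. Holding the prompt constant and varying only the temperature makes the parameter’s meaning observable in the outputs.

```python
# Minimal sketch of the "adjust the model temperature" exercise.
# Assumes the `openai` Python package and an API key in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Describe a favela in one sentence."

for temperature in (0.0, 0.7, 1.4):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temperature,
        n=3,  # three samples per setting
    )
    # Low temperature -> near-identical completions; high -> more varied ones.
    for choice in response.choices:
        print(f"T={temperature}: {choice.message.content!r}")
```

Documenting the three batches of completions side by side in the shared file turns an opaque setting into an empirical observation about the model.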
Repurposing GenAI (Figure 2, Method 2) for social or media research is a method that creates new ways of using GenAI and prompt engineering techniques without fundamentally changing their nature (see Rogers, 2013; Marres, 2017). In other words, it is the creative use of prompts and their outputs, GenAI apps’ interfaces, or code as research methods or objects of critique. Repurposing refers to established digital methods practices of conducting research with materials not initially created or intended for that purpose, such as digital objects (hashtags, URLs, web entities) and web technologies and methods (crawlers, scrapers, APIs, knowledge graphs). Sections 4 and 5 demonstrate how GenAI models and generated images can be repurposed to uncover racial stereotypes. Repurposing GenAI is an extension of method 1: because I now understand GenAI, I will take a risk in repurposing it.
The map user engages with a rationale that intentionally starts with medium specificity and only then defines the research aim accordingly. Once the generative method(s) and the associated web-based application or open-source model are defined, we determine the expected outputs and required inputs. For example, text, instructions, or tables could be used to generate audio, but which of these options is most compatible with the attitude of repurposing GenAI for social research? What are the reasons behind that choice, or why not opt for a given input? Then, one tries, tests, and generates prompts while “being mindful of prompt formulation”, as different prompt settings can shape the outcomes (see Borra, 2024). Examples involve creating research personas, using search queries (see Colombo et al., 2023; Borra, 2024) or political positioning efforts (see Hartman et al., 2023; Rozado, 2023) as prompts, or using specified and underspecified prompts to capture biases in the LLMs’ training datasets, as we demonstrate in section 5. The decisions made are visually documented in a shared file, allowing all parties to see how generative methods are being repurposed.
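Such documentation can also be kept programmatically alongside the visual boards. The sketch below is our hypothetical illustration, not part of the map’s toolkit: it appends each prompt trial and its settings to a shared CSV file so that a group can later reconstruct how a generative method was repurposed; all function and field names are ours.

```python
# Hypothetical logging harness for prompt trials (the essay's workshops
# used shared Figma boards; a CSV is a programmatic equivalent).
import csv
from datetime import datetime, timezone

LOG_FIELDS = ["timestamp", "app", "model", "prompt", "temperature", "output"]

def log_trial(path, app, model, prompt, temperature, output):
    """Append one prompt trial, with its settings, to a shared CSV log."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # first use: write the header row
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "app": app, "model": model, "prompt": prompt,
            "temperature": temperature, "output": output,
        })

# Example (illustrative values):
# log_trial("prompt_log.csv", "Bing AI", "unknown (proprietary)",
#           "A movie poster starring a Black woman", None, "img_001.png")
```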
Designing digital methods projects with and about GenAI (Figure 2, Method 3) organizes a workflow responsive to Method 2 and open to experimental and exploratory analysis of GenAI models, outputs, and interfaces. It is a way to explore new forms of knowledge production. As an extension of the previous methods: because I understand what aspects of GenAI can be repurposed, I will design a digital methods project with it. Once again, decisions are recorded in a shared file. Many questions arise about what we should look at and how to implement methods, such as how to analyze GenAI visual, textual, and audio outputs. This essay does not answer these questions directly but illustrates possibilities mapped by applying the AI Methodology Map in research-led teaching and learn-by-doing workshops (see section 4). It also showcases how GenAI visual-generated content can be repurposed with digital methods research (see section 5).
5 The Workflow: How Can GenAI Visual-generated Content Be Repurposed Using Digital Methods?
This section describes how we repurposed GenAI methods and visual outputs from nine models to expose racial stereotypes. It begins by situating a case study triggered by a generated image of a Black woman holding a gun, despite the original prompt specifying a Disney Pixar-like image. This raised the question of to what extent the currently dominant GenAI apps for image generation (see Figure 8) contribute to the perpetuation of detrimental stereotypes of Black women. We then explain the digital methods and network vision methodology15 used and conclude by discussing the main preliminary findings.
5.1 A Black Woman Movie Star in the Favela – GenAI’s Biased Take with a Gun!
On October 25th, 2023, Renata Souza, a state deputy from Rio de Janeiro, shared a video on Instagram (Figure 9) exposing a racist issue with Microsoft Bing AI’s generative models. Souza had created a prompt to generate a Disney Pixar movie poster with herself in the leading role, based on the following instructions:
A Disney Pixar-inspired movie poster with title “Renata Souza”. The main character is a Black woman using afro hair tied up, dress an African style blazer. The scene should be in the distinct digital art style of Pixar, a favela in the back, with a focus on character expressions, vibrant colors, and detailed textures that are characteristic of their animations, with the title “Renata Souza” (Souza, 2023).
The model generated an image of a Black woman holding a gun with a favela in the background (Figure 9). Souza expressed her surprise and outrage in an Instagram video16, stating that she had never mentioned weapons or violence in her instructions: she had only requested a poster featuring a Black woman in a favela, but the model added a gun to the image. “This is proof that algorithmic racism exists!”, she said. This GenAI output exposes the algorithmic bias and discrimination embedded in Microsoft Bing AI models and how the data they were trained on reproduces past discrimination, i.e., the association of a Black woman in a favela with violence, alongside poor performance when generating images of underrepresented groups (Buolamwini, 2017). It also reflects central discussions on the algorithmic discrimination and oppression ingrained in artificial intelligence technologies (Noble, 2018; Sharma, 2024), despite these issues being obscured by the rhetoric of technology’s neutrality (Noble, 2013). The context of the case study uncovers how Bing AI is perpetuating these patterns. After the repercussions, Bing AI blocked the prompt used by Souza, arguing that it “might be in conflict with our content policy”. Despite that, the model had no problem generating images when we excluded Souza’s name from the prompt.
As argued by Kassom and Marino (2022), social researchers should not only account for technical understandings but also consider “the broader social impact of an algorithm’s use and whether that use contributes to or ameliorates racial inequity” (p. 2). Reports of AI bias, discrimination, and misleading or poor representations of specific cultures in proprietary AI have been well documented by researchers from diverse backgrounds (see Birhane, 2022; Buolamwini & Gebru, 2018; Silva, 2023). Examples include Google Photos tagging Black people as gorillas in 2015, Stable Diffusion associating Black men with gang members in 2022, Midjourney failing to generate images of Black doctors treating white children in 2023, and a Canvas feature marking Black hairstyles as insecure in 2024 (Silva, 2023). By repurposing GenAI apps and associated LLMs for image generation, this case study joins these efforts in documenting GenAI race stereotypes in the context of image generation, taking as its starting point Renata Souza’s Disney Pixar movie poster and Bing AI’s biased take with a gun (Figure 9).
5.2 Designing Digital Methods Research with GenAI Visual Outputs
To what extent do the current dominant GenAI models for image generation contribute to the perpetuation of detrimental stereotypes of Black women? To answer this question, we employed network vision methods (Omena et al., 2021) to visualize and analyze images generated by nine GenAI apps and associated LLMs, with computer vision and through networks (Figure 10). In other words, we repurposed GenAI visual outputs to (1) investigate the response variations among generative models when presented with identical prompts and (2) examine the presence (or absence) and characteristics of racial stereotypes, particularly associations between Black individuals and violence, across different models. Thus, the main objectives of the case study are not only to investigate GenAI-related social issues but also to interrogate generative models’ outputs, considering that these outputs “are not simply lookups or search queries over the training data” but an entry point to the “transformer intelligence” (Burkhardt & Rieder, 2024, p. 4) of AI image generation models.
Considering the unpredictability of GenAI outputs and the models’ constant updates based on users’ practices (Burkhardt & Rieder, 2024), we first conducted tests prompting various image-generation models to explore and compare results. Next, we defined the formulation of specified and underspecified prompts (see Figure 10). The former reproduces the original prompt by the Brazilian deputy, so that we could assess how nine GenAI apps respond to it. The latter adapts the original prompt into a broader one, reducing it to its main keywords for deeper scrutiny of the models’ responses: “A movie poster starring a Black woman”.
Making image collections is the second step. We generated 30 images for each prompt using Artflow (Wojcicki, 2020), Bing AI (Microsoft, 2023), BlueWillow (Limewire, 2023), Craiyon (Dayma, 2022), Dall-E 2 (Ramesh et al., 2022), Dream Studio (Rombach et al., 2021), Lexica (Shameem, 2022), RunwayML (Valenzuela et al., 2018), and Stability.ai (Mostaque, 2019). We paid US$40 to generate images with Dall-E 2, Lexica, and Stability.ai. Visualizing and analyzing the 540 generated images is the third step. We built networks to examine patterns, similarities, and specific characteristics in the portrayal of Black women across GenAI apps and their responses to the same prompts. Additionally, we arranged images according to each GenAI app’s output, prompt, and image hue. Both methods, separate yet complementary, helped us identify racial stereotypes responsive to our specified and underspecified prompts.
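To illustrate the image-collection step, the sketch below generates one such 30-image batch per prompt through OpenAI’s images endpoint for Dall-E 2, standing in for the nine interfaces used in the study; several apps were operated manually through their web interfaces, and all file names and paths here are illustrative.

```python
# Sketch of the image-collection step for one of the nine apps (Dall-E 2
# via OpenAI's images endpoint). Assumes the `openai` package and an API key.
import base64
import pathlib

from openai import OpenAI

client = OpenAI()

PROMPTS = {
    # Abridged here; the full specified prompt is quoted in section 5.1.
    "specified": 'A Disney Pixar-inspired movie poster with title "Renata Souza" ...',
    "underspecified": "A movie poster starring a Black woman",
}

for label, prompt in PROMPTS.items():
    outdir = pathlib.Path("images") / label
    outdir.mkdir(parents=True, exist_ok=True)
    for i in range(30):  # 30 images per prompt, as in the case study
        result = client.images.generate(
            model="dall-e-2", prompt=prompt, n=1,
            size="1024x1024", response_format="b64_json",
        )
        (outdir / f"dalle2_{label}_{i:02d}.png").write_bytes(
            base64.b64decode(result.data[0].b64_json)
        )
```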
The methods employed in this study can serve as a template for investigating and addressing various social issues by repurposing GenAI outputs and examining generative models’ responses. They can be adapted for other projects exploring different societal concerns.
5.3 Findings: Visual Biases in AI-Generated Content
Overall, Bing AI is the only model displaying images of Black women associated with violence and guns, with four instances featuring guns among its underspecified-prompt images. A lack of body diversity was detected, with a recurring pattern depicting Black women as young and slender. Another prevalent stereotype pertains to facial expressions, where a serious, angry, or intense gaze is commonly attributed to Black women. As for generative visual models such as those used by Craiyon, results show they were not trained to capture specific cultural and contextual nuances, such as accurately representing a Brazilian “favela”. In Brazil, favelas are urban environments inhabited by low-income communities and often associated with social challenges such as poverty, violence, and a lack of adequate infrastructure. This result underscores the need for cultural sensitivity and proper training to ensure that GenAI models accurately capture and represent various social and cultural contexts. Below, we present detailed findings based on the research questions.
RQ1: “How do different visual generative methods respond to the same prompt?”
The network vision analysis revealed that most models respond similarly to both prompts. Images generated by Bing AI, BlueWillow, DreamStudio, Lexica, RunwayML, and Stability.ai were positioned by ForceAtlas2 (Jacomy et al., 2014) in the centre of both networks. These models’ images were therefore tagged mainly with the same labels/web entities by Google Vision AI, i.e., they responded to the prompts with the same image styles. With the specified prompt (Figure 11), these models generated 3D cartoon images in a medium close-up shot portraying a Black woman facing forward with afro hair, in vibrant attire against colourful backgrounds. For the underspecified prompt (Figure 12), these models mainly generated medium close-up images of Black women facing forward with afro hair, this time in a photographic style with orange/brown colour palettes.
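The image-label networks behind Figures 11 and 12 can be rebuilt with standard tooling. The following sketch is our illustration rather than the study’s exact pipeline: it assumes the Google Vision AI labels were already collected into a CSV (e.g. with Memespector GUI; Chao, 2021), builds the bipartite image-label graph with networkx, and exports a GEXF file for ForceAtlas2 layout in Gephi; the column names are illustrative.

```python
# Sketch of a bipartite image-label network (images <-> Vision AI labels),
# exported for ForceAtlas2 layout in Gephi. Assumes labels were already
# collected into vision_labels.csv; column names are illustrative.
import csv

import networkx as nx

G = nx.Graph()

# One row per image; "labels" holds the Vision AI tags, e.g. "cartoon;poster".
with open("vision_labels.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        image = row["image_file"]  # e.g. "bingai_specified_03.png"
        G.add_node(image, kind="image", app=row["app"], prompt=row["prompt"])
        for label in row["labels"].split(";"):
            label = label.strip().lower()
            G.add_node(label, kind="label")
            G.add_edge(image, label)

# Images sharing many labels sit close together under ForceAtlas2 in Gephi,
# which is how the central cluster of similar model responses emerges.
nx.write_gexf(G, "image_label_network.gexf")
```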
Three models stood apart from the majority when responding to the prompts, with their images placed in the periphery of both networks. Craiyon only generated solid-colour backgrounds for both prompts; its imagery was therefore tagged mainly with labels/web entities related to facial features and hairstyles. That is significant for the specified prompt, which explicitly asked for a favela background to which Craiyon did not respond. Artflow’s images, by contrast, tend to have more complex backgrounds than the other models’ outputs; because of that, for both prompts, its images look more like a film frame than a movie poster. Finally, Dall-E 2 generated 2D cartoon images for both prompts, showing an imperfect reproduction of the Pixar-like style requested in the specified prompt and distancing its imagery from the other models, which mainly generated photographic images in response to the underspecified one.
RQ2: “Do other Generative AI models also generate similar or different racial stereotypes, such as associating Black people with violence? If they do, what are the specific characteristics of these stereotypes?”
The image grid visualization (Figure 13) facilitated the identification of racial stereotypes. In line with feminist media studies, which highlight the women-as-sign trope — where women’s bodies are used as icons symbolically representing specific communities (Báez, 2023) — GenAI image models often rely on stereotypes when depicting Black women. This limited representation supports and perpetuates oppression against this community (hooks, 1992). For example, Bing AI generated four images depicting Black women with guns. Additionally, all models exhibited varying degrees of other stereotypes.
Craiyon is the only model that does not represent the favela as a busy, dirty, poor place, because it only generates images with solid colours in the background. All models lacked diversity in body types, with no images of older women and only one plus-size woman, a result aligned with the fetishization of Black women’s bodies identified by other researchers (see Noble, 2013; 2018). Children and teenagers only appear in the images generated by the specified prompt. There was also a lack of diversity in hairstyles, with Bing AI showing the most variation, albeit primarily in images from the underspecified prompt. Finally, images depicting Black women smiling were mainly associated with the specified prompt, while those without the Pixar-inspired input often featured stern or angry expressions, with Dall-E 2 displaying this stereotype most prominently.
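The image grids themselves were arranged with ImageSorter (Visual Computing Group, 2018); a minimal Python equivalent of the hue ordering behind Figure 13, with illustrative file paths, might look like this:

```python
# Sketch of hue-based ordering for an image grid (a minimal stand-in for
# ImageSorter). Assumes Pillow is installed and images/ holds the outputs.
import colorsys
import pathlib

from PIL import Image

def average_hue(path):
    """Average hue of an image in [0, 1), estimated via a 1x1 resize."""
    r, g, b = Image.open(path).convert("RGB").resize((1, 1)).getpixel((0, 0))
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h

# Order every generated image by hue, then lay them out in that order.
images = sorted(pathlib.Path("images").glob("**/*.png"), key=average_hue)
for img in images:
    print(img)
```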
6 Conclusion, Challenges, Provocations
In this essay, we introduced the AI Methodology Map and its theoretical principles, system of methods, and applications in educational and research settings. The map theoretically covers perspectives and discussions on empirical engagement with GenAI in the Social Sciences and Humanities. As an external object, it is materialized as teaching material and a pedagogical tool for exploring GenAI apps in the context of digital methods research. While we expect the reader to take these perspectives together, they can also serve separate purposes if desired. The map’s conceptual framework combines the practical and theoretical foundations of digital methods, visual thinking and the documentation of data practices, and interdisciplinary research. As a pedagogical resource and theoretical framework, the map integrates three interconnected methods and a technicity perspective that elicit attitudes of making room for, repurposing, and designing projects with and about GenAI. The expected results are geared toward developing the specific mindset required to advance digital methods research rather than “how-to” steps to achieve a specific research outcome, differentiating the methodology map from a method protocol or recipe. Moreover, the map’s teaching guidelines highlight a fundamental aspect of digital methods: practical, technical, and theoretical modes of reasoning interrelate with each other, not just occasionally but essentially (Omena, 2021a; 2022). Its application is thus an invitation to understand GenAI from uncomplicated and technical perspectives while thinking about how we can use its outputs as research material or objects of critique, as opposed to the educational concerns of social research and higher education institutions, which focus on developing methodologies to neutralize bias or disclose ethical issues and misuses of GenAI — quick responses to the AI impact. In this sense, the map contributes to building literacy in GenAI by diminishing the gap that Mercedes Bunz (2022) has addressed as a moment of profound human misunderstanding of AI cultures.
We argued that implementing the AI Methodology Map can open up applied scenarios that account for and repurpose GenAI for social research. The map thus functions as a theoretical framework and a pedagogical resource (interactive toolkit and teaching material), bridging theoretical and empirical engagement with GenAI. However, there are limitations and challenges to consider. First, the three applications of the map — not a final product, as it can be expanded — serve only as a starting point, both for understanding in practice the potential of GenAI as a research method and object of experimentation, and for reflecting on the epistemology of digital methods. Second, without the skills to work with GenAI open-source code platforms and Git repositories, ease of access to generative methods relies on the AI market: it is never a problem if you pay for it. Consequently, the absence of free trial credits for GenAI models directly influenced the students’ decisions to work with specific generative methods (e.g. text and image over audio and video generation), thereby limiting method creativity and practice. Lastly, during the workshops, more attention could have been paid to the role of foundation models and the dominant models in the AI market. Likewise, despite our efforts to explain prompts and their importance, workshop participants did not pay much attention to their role in implementing the AI Methodology Map. Effective prompting techniques were refined and implemented outside the workshop contexts, once students were given a project assignment and extra time (weeks) to develop a digital methods project with GenAI.
While this essay has illuminated potential applications of the AI Methodology Map, such as its theoretical points serving as principles of orientation for engaging with GenAI and its use in creating image collections to scrutinize generative models and uncover inherent bias in their training datasets, we conclude by addressing three provocations. Regarding access to AI methods for research purposes, there are lessons from the history of web API creation, maintenance, discontinuation, and closure: from free and almost unlimited access, to restricted access granted according to project themes and institutional (or scholarly) prestige, to finally having no option but to pay. Social media and Vision AI APIs are exemplary cases17. If advancing digital methods comes with a cost, will we be willing to pay to access GenAI models? Should we ask which models are worthwhile and why? Or are we just replacing the old consumption impulse to access large amounts of social media data with the impulse to generate content with GenAI and run models for comparison studies? The second provocation refers to repurposing GenAI with digital methods. It is already acknowledged in the AI community that generative AI methods “are essentially projecting a single worldview, instead of representing diverse cultures or visual identities”18 (Luccioni, in an interview for Nicoletti & Bass, 2023). If all AI models have inherent biases, should we continue to identify gaps, lacks, or absences in GenAI by developing methods based on testing and experimenting with prompt modifications? Or should we take a step back, slow down, and make room for properly learning about prompting techniques and the models themselves?
Lastly, and for the future of digital methods, to what extent are we moving towards developing more methods for dealing with GenAI data outputs, opening up a new agenda for prompting methods? For example, are we developing methods to access a foundation model’s internal “knowledge space” (see Burkhardt & Rieder, 2024), where user data is no longer central or has become secondary? We have also learned that conventional digital methods can be transferred to the analysis of GenAI outputs, even if they sometimes tell us what we already expect: AI models are trained differently and therefore carry distinct forms of discrimination, e.g., Microsoft Bing AI associates Black women with violence, generating images of them holding guns. How will we employ a conceptual, technical, and empirical understanding of GenAI to think about new ways to design and implement methods?
We anticipate that the AI Methodology Map’s reproducibility will spur further discussions, extending the conversation we have initiated here.
References
Abid, A., Abdalla, A., Abid, A., Khan, D., Alfozan, A., & Zou, J. (2019). Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild. 2019 ICML Workshop on Human in the Loop Learning. arXiv, 1906.02569. https://doi.org/10.48550/arXiv.1906.02569
Agência Lusa. (2023). Universidades de Portugal, Brasil e Espanha juntam-se para discutir impacto e “transição digital” como resposta aos novos “desafios. Observador, 8 November. https://observador.pt/2023/11/08/universidades-de-portugal-brasil-e-espanha-juntam-se-para-discutir-impacto-e-transicao-digital-como-resposta-aos-novos-desafios/
Amietta, R., Matos, A.F.N., & Guilbault, A. (2023). DEW. https://nerd-life-squad.github.io/about
Anderson, L.W., & Krathwohl, D.R. (2001). A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives: Complete Edition. New York, NY: Longman.
Anderson, C., Heinisch, J.S., Deldari, S., Salim, F. D., Ohly, S., David, K., & Pejovic, V. (2023). Toward Social Role-based Interruptibility Management. IEEE Pervasive Computing, 22(1), 59–68. https://doi.org/10.1109/mprv.2022.3229905
Antolak-Saper, N., Beilby, K., Boniface, B., Bui, D., Burgess, P., Cheema, A., Crocco, M., Fordyce, R., Galbraith, K., Lansdell, G., Lim, C., Moore, J., Nathania, A., Nawaz, S., Raveendran, L., Saha, T., Sapsed, C., Shannon, B., Soh, K., Swiecki, Z., Vu, T., Wagstaff, P., Wallingford, E., Wong, P., & Zaid, F. (2023). Guides for Assessment Re(design) and Reform. AI in Education Learning Circle. https://www.ai-learning-circle-mon.com/
Arnheim, R. (1980). A Plea for Visual Thinking. Critical Inquiry, 6(3), 489–497. https://doi.org/10.1086/448061
Arnheim, R., & Grundmann, U. (2001). The Intelligence of Vision: An Interview with Rudolf Arnheim. Cabinet Magazine, 26 April. https://www.cabinetmagazine.org/issues/2/grundmann_arnheim.php
Báez, J.M. (2023). Performing Representational Labor: Blackness, Indigeneity, and Legibility in Global Latinx Media Cultures. Feminist Media Studies, 23(5), 2455–2470. https://doi.org/10.1080/14680777.2022.2056755
Baidoo-Anu, D., & Ansah, L.O. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. Journal of AI, 7(1), 52–62. https://doi.org/10.61969/jai.1337500
Banh, L., & Strobel, G. (2023). Generative artificial intelligence. Electronic Markets, 33(1). https://doi.org/10.1007/s12525-023-00680-1
Bastian, M., Heymann, S., & Jacomy, M. (2009). Gephi: An Open Source Software for Exploring and Manipulating Networks. Proceedings of the International AAAI Conference on Web and Social Media, 3(1), 361–362. https://doi.org/10.1609/icwsm.v3i1.13937
Birhane, A. (2022). The Unseen Black Faces of AI Algorithms. Nature, 610(7932), 451–452. https://doi.org/10.1038/d41586-022-03050-7
Boiret, G. (2016). PhantomBuster. [Software]. https://phantombuster.com/
Borra, E. (2023). ErikBorra/PromptCompass (v0.4). Zenodo. https://doi.org/10.5281/zenodo.10252681
Borra, E. (2024). The Medium Is the Methods: Using Large Language Models (LLMs) in Digital Research. [Keynote]. Digital Methods Winter School, University of Amsterdam, Amsterdam, The Netherlands.
Botta, M., Autuori, A., Subet, M., Terenghi, G., Omena, J.J., Leite, E., Kim, F.C. (2024). Designing With: A New Educational Module to Integrate Artificial Intelligence, Machine Learning and Data Visualization in Design Curricula. https://designingwithai.ch/
Bounegru, L., Gray, J., Venturini, T., & Mauri, M. (Eds.). (2018). A Field Guide to ‘Fake News’ and Other Information Disorders. Public Data Lab. https://doi.org/10.2139/ssrn.3097666
Bunz, M. [GoetheUK]. (2022). The Culture of Artificial Intelligence. Goethe Annual Lectures at the Goethe-Institut London. [Video]. YouTube, 24 November. https://www.youtube.com/watch?v=bTR6EP34W_w
Buolamwini, J.A. (2017). Gender Shades: Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers. [Master’s thesis, Massachusetts Institute of Technology]. Cambridge, MA.
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 77–91.
Burkhardt, S., & Rieder, B. (2024). Foundation Models are Platform Models: Prompting and the Political Economy of AI. Big Data & Society, 11(2). https://doi.org/10.1177/20539517241247839
Castro, J.C.M., & Shumsher, S. (2023). Situating Gen-AI Pain & Pleasure: Interpretative Querying Approach Combining Situational Analysis with Digital Methods [Presentation slides]. Faculdade de Ciências Sociais e Humanas, Universidade NOVA de Lisboa. http://dx.doi.org/10.13140/RG.2.2.16436.67201
Chao, J. (2021). Memespector GUI: Graphical User Interface Client for Computer Vision APIs (Version 0.2.5 beta). [Software]. https://github.com/jason-chao/memespector-gui
Chauhan, A., Anand, T., Jauhari, T., Shah, A., Singh, R., Rajaram, A., & Vanga, R. (2024). Identifying Race and Gender Bias in Stable Diffusion AI Image Generation. 2024 IEEE 3rd International Conference on AI in Cybersecurity (ICAIC), 1–6. https://doi.org/10.1109/ICAIC60265.2024.10433840
Ciston, S. (2023). A Critical Field Guide for Working with Machine Learning Datasets. https://knowingmachines.org/critical-field-guide
Colombo, G., De Gaetano, C., & Niederer, S. (2023). Prompting For Biodiversity: Visual Research With Generative AI. Digital Methods Summer School 2023. https://wiki.digitalmethods.net/Dmi/PromptingForBiodiversity
Cross, N. (2001). Designerly Ways of Knowing: Design Discipline versus Design Science. Design Issues, 17(3), 49–55. https://doi.org/10.1162/074793601750357196
Dąbkowski, P., & Staniszewski, M. (2022). ElevenLabs. https://elevenlabs.io/
Dayma, B. (2022). Craiyon (v3). https://www.craiyon.com/
de Seta, G., Pohjonen, M., & Knuutila, A. (2023). Synthetic Ethnography: Field Devices for the Qualitative Study of Generative Models. SocArXiv. https://doi.org/10.31235/osf.io/zvew4
Dove, G., Halskov, K., Forlizzi, J. & Zimmerman, J. (2017). UX Design Innovation: Challenges for Working with Machine Learning as a Design Material. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 278–288). ACM Press. https://dl.acm.org/doi/10.1145/3025453.3025739
Duguay, S., & Gold-Apel, H. (2023). Stumbling Blocks and Alternative Paths: Reconsidering the Walkthrough Method for Analyzing Apps. Social Media + Society, 9(1). https://doi.org/10.1177/20563051231158822
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A., Baabdullah, A.,M., Koohang, A., Raghavan, V., Ahuja,M., Albanna, H., Albashrawi, M.A., Al-Busaidi, A.S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Carter, L., Chowdhury, S., Crick, T., Cunningham, S.W., Davies, G.H., Davison, R.M., Dé, R., Dennehy, D., Duan, Y., Dubey, R., Dwivedi, R., Edwards, J.S., Flavián, C., Gauld, R., Grover, V., Hu, M.-C., Janssen, M., Jones, P., Junglas, I., Khorana, S., Kraus, S., Larsen, K.R., Latreille, P., Laumer, S., Malik, F.T., Mardani, A., Mariani, M., Mithas, S., Mogaji, E., Nord, J.H., O’Connor, S., Okumus, F., Pagani, M., Pandey, N., Papagiannidis, S., Pappas, I.,O., Pathak, N., Pries-Heje, J., Raman, R., Rana, N.P., Rehm, S.-V., Ribeiro-Navarrete, S., Richter, A., Rowe, F., Sarker, S., Carsten Stahl, S., Kumar Tiwari, M., van der Aalst, W., Venkatesh, V., Viglia, G., Wade, M., Walton, P., Wirtz, J., & Wright, R. (2023). “So What if ChatGPT Wrote it?” Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
Edkie, A., Pandey, D., & Roy, S. (2020). Murf.AI. https://murf.ai/
Farooq, M., Buzdar, H. Q. & Muhammad, S. (2023). AI-Enhanced Social Sciences: A Systematic Literature Review and Bibliographic Analysis of Web of Science Published Research Papers. Pakistan Journal of Society, Education and Language (PJSEL), 10(1), 250–267.
Ferrara, E. (2024). Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. Sci, 6(1). https://doi.org/10.3390/sci6010003
Franklin, U. (1990). The Real World of Technology. Toronto: CBC.
García-Peñalvo, F., & Vázquez-Ingelmo, A. (2023). What Do We Mean by GenAI? A Systematic Mapping of The Evolution, Trends, and Techniques Involved in Generative AI. International Journal of Interactive Multimedia and Artificial Intelligence, 8(4), 7. https://doi.org/10.9781/ijimai.2023.07.006
Gaspar, B. (2023). Cientistas divulgam 10 diretrizes para a Educação lidar com a Inteligência Artificial. Fepesp - Federação Dos Professores Do Estado de São Paulo. https://fepesp.org.br/noticia/cientistas-divulgam-10-diretrizes-para-a-educacao-lidar-com-a-inteligencia-artificial/
Google Creative Lab (2017). Teachable Machine. [software]. https://teachablemachine.withgoogle.com/
Gorska, A.M., & Jemielniak, D. (2023). The Invisible Women: Uncovering Gender Bias in AI-generated Images of Professionals. Feminist Media Studies, 23(8), 4370–4375. https://doi.org/10.1080/14680777.2023.2263659
Goulart, J. (2024). Silvio Meira: ‘Estamos na era da pedra lascada da IA, mas o futuro chega em 800 dias’. Brazil Journal, 16 March. https://braziljournal.com/silvio-meira-estamos-na-era-da-pedra-lascada-da-ia-mas-o-futuro-chega-em-800-dias/
Gray, J., Bounegru, L., Rogers, R., Venturini, T., Ricci, D., Meunier, A., Mauri, M., Niederer, S., Sánchez Querubín, N., Tuters, M., Kimbell, L., & Munk, K. (2022). Engaged Research-led Teaching: Composing Collective Inquiry with Digital Methods and Data. Digital Culture & Education, 14(3), 55–86. https://www.digitalcultureandeducation.com/volume-14-3
Graziani, M., Dutkiewicz, L., Calvaresi, D., Amorim, J. P., Yordanova, K., Vered, M., Nair, R., Henriques Abreu, P., Blanke, T., Pulignano, V., Prior, J.O., Lauwaert, L., Reijers, W., Depeursinge, A., Andrearczyk, V., & Müller, H. (2023). A Global Taxonomy of Interpretable AI: Unifying the Terminology for the Technical and Social Sciences. Artificial Intelligence Review, 56(4), 3473–3504. https://link.springer.com/article/10.1007/s10462-022-10256-8
Greene, C. (2023). AI and the Social Sciences: Why All Variables are Not Created Equal. Res Publica, 29(2), 303–319. https://doi.org/10.1007/s11158-022-09544-5
Hartman, J., Schwenzow, J., & Witte, M. (2023). The Political Ideology of Conversational AI: Converging Evidence on ChatGPT’s Pro-environmental, Left-libertarian Orientation. arXiv. https://doi.org/10.2139/ssrn.4316084
Hoel, A. S. (2012). Technics of Thinking. In A.S. Hoel & I. Folkvord (Eds.), Ernst Cassirer on Form and Technology: Contemporary Readings (pp. 65–91). London: Palgrave Macmillan.
Honig, C., Rios, S., & Oliveira, E. (2023). A Tool for Learning: Classroom Use-cases for Generative AI. The Chemical Engineer, 1 June. https://www.thechemicalengineer.com/features/a-tool-for-learning-classroom-use-cases-for-generative-ai/
hooks, b. (1992). Black Looks: Race and Representation. Boston, MA: South End Press.
Jacomy, M., Venturini, T., Heymann, S., & Bastian, M. (2014). ForceAtlas2, a Continuous Graph Layout Algorithm for Handy Network Visualization Designed for the Gephi Software. PLoS ONE, 9(6), e98679. https://doi.org/10.1371/journal.pone.0098679
Kendall, A., Grimes, M., & Cipolla, R. (2016). PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization. arXiv. https://doi.org/10.1109/ICCV.2015.336
Koplin, J.J. (2023). Dual-use Implications of AI text Generation. Ethics and Information Technology, 25(2), 32. https://doi.org/10.1007/s10676-023-09703-z
Leshkevich, T., & Motozhanets, A. (2022). Social Perception of Artificial Intelligence and Digitization of Cultural Heritage: Russian Context. Applied Sciences, 12(5), 2712. https://doi.org/10.3390/app12052712
Limewire. (2023). BlueWillow. [Software]. https://www.bluewillow.ai/
Luccioni, A.S., Akiki, C., Mitchell, M., & Jernite, Y. (2023). Stable Bias: Evaluating Societal Representations in Diffusion Models. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, & S. Levine (Eds.), Advances in Neural Information Processing Systems (pp. 56338–56351). New York, NY: Curran Associates.
Maier, N., Parodi, F., & Verna, S. (2004). DownThemAll! (v4.12.1). [Web browser plugin]. https://www.downthemall.org/
Manovich, L. (2013). Museum Without Walls, Art History Without Names: Methods and Concepts for Media Visualization. In C. Vernallis, A. Herzog & J. Richardson (Eds.), The Oxford Handbook of Sound and Image in Digital Media (pp. 252–278). Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199757640.013.005
Manovich, L. (2020). Cultural Analytics. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/11214.001.0001
Marres, N. (2017). Digital Sociology: The Reinvention of Social Research. London: Wiley.
Mauri, M. & Ciuccarelli, P. (2016). Designing Diagrams for Social Issues. Future Focused Thinking - DRS International Conference 2016. https://doi.org/10.21606/drs.2016.185
Mauri, M., Briones, M.A., Gobbo, B. & Colombo, G. (2020). Research Protocol Diagrams as Didactic Tools to Act Critically in Dataset Design Processes. INTED2020 Proceedings (pp. 9034–9043). https://doi.org/10.21125/inted.2020.2470
Microsoft. (2023). Bing Image Creator. https://www.bing.com/images/create
Midjourney Inc. (2022). Midjourney (Version 5.2). https://www.midjourney.com/
Mostaque, E. (2019). Stability.ai. https://stability.ai/
Nicoletti, L., & Bass, D. (2023). Humans Are Biased. Generative AI Is Even Worse. Bloomberg, 9 June. https://www.bloomberg.com/graphics/2023-generative-ai-bias/
Noble, S.U. (2013). Google Search: Hyper-visibility as a Means of Rendering Black Women and Girls Invisible. InVisible Culture, 19. https://doi.org/10.47761/494a02f6.50883fff
Noble, S.U. (2018). Algorithms of Oppression. New York, NY: New York University Press. https://doi.org/10.18574/nyu/9781479833641.001.0001
Omena, J.J. (2021a). Digital Methods and Technicity-of-the-Mediums. From Regimes of Functioning to Digital Research. [Doctoral Dissertation, Universidade NOVA de Lisboa]. http://hdl.handle.net/10362/127961
Omena, J.J., Pilipets, E., Gobbo, B., & Chao, J. (2021b). The Potentials of Google Vision API-based Networks to Study Natively Digital Images. Revista Diseña, 19. https://doi.org/10.7764/disena.19.article.1
Omena, J.J. (2022). Technicity-of-the-mediums. In A. Ceron (Ed.), Elgar Encyclopedia of Technology and Politics (pp. 77–81). Cheltenham: Elgar.
OpenAI. (2023). ChatGPT [Large language model]. https://chat.openai.com
OpenAI, Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Leoni Aleman, F., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., Avila, R., Babuschkin, I., Balaji, S., Balcom, V., Baltescu, P., Bao, H., Bavarian, M., Belgum, J., Bello, I., Berdine, J., Bernadett-Shapiro, G., Berner, C., Bogdonoff, L., Boiko, O., Boyd, M., Brakman, A.L., Brockman, G., Brooks, T., Brundage, M., Button, K., Cai, T., Campbell, R., Cann, A., Carey, B., Carlson, C., Carmichael, R., Chan, R., Chang, C., Chantzis, F., Chen, D., Chen, S., Chen, R., Chen, J., Chen, M., Chess, B., Cho, C., Chu, C., Won Chung, H., Cummings, D., Currier, J., Dai, Y., Decareaux, C., Degry, T., Deutsch, N., Deville, D., Dhar, A., Dohan, D., Dowling, S., Dunning, S., Ecoffet, A., Eleti, A., Eloundou, T., Farhi, D., Fedus, L., Felix, N., Posada Fishman, S., Forte, J., Fulford, I., Gao, L., Georges, E., Gibson, C., Goel, V., Gogineni, T., Goh, G., Gontijo-Lopes, R., Gordon, J., Grafstein, M., Gray, S., Greene, R., Gross, J., Shane Gu, S., Guo, Y., Hallacy, C., Han, J., Harris, J., He, Y., Heaton, M., Heidecke, J., Hesse, C., Hickey, A., Hickey, W., Hoeschele, P., Houghton, B., Hsu, K., Hu, S., Hu, X., Huizinga, J., Jain, S., Jain, S., Jang, J., Jiang, A., Jiang, R., Jin, H., Jin, D., Jomoto, S., Jonn, B., Jun, H., Kaftan, T., Kaiser, Ł., Kamali, A., Kanitscheider, I., Shirish Keskar, N., Khan, T., Kilpatrick, L., Wook Kim, J., Kim, C., Kim, Y., Kirchner, J.H., Kiros, J., Knight, M., Kokotajlo, D., Kondraciuk, Ł., Kondrich, A., Konstantinidis, Kosic, K., Krueger, G., Kuo, V., Lampe, M., Lan, I., Lee, T., Leike, J., Leung, J., Levy, A.D., Ming Li, C., Lim, R., Lin, M., Lin, S., Litwin, M., Lopez, T., Lowe, R., Lue, P., Makanju, A., Malfacini, K., Manning, S., Markov, T., Markovski,Y., Martin, B., Mayer, K., Mayne, A., McGrew, B., Mayer McKinney, S., McLeavey, C., McMillan, P., McNeil, J., Medina, D., Mehta, A., Menick, J., Metz, L., Mishchenko, A., Mishkin, P., Monaco, V., Morikawa, E., Mossing, D., Mu, T., Murati, M., Murk, O., Mély, D., Nair, A., Nakano, R., Nayak, R., Neelakantan, A., Ngo, R., Noh, H., Ouyang, L., O’Keefe, C., Pachocki, J.,Paino, A., Palermo, J., Pantuliano, A., Parascandolo, G., Parish, J., Parparita, E., Passos, A., Pavlov, M., Peng, A., Perelman, A., de Avila Belbute Peres, F., Petrov, M., Ponde de Oliveira Pinto, H., Rai Pokorny, .M., Pokrass, M., Pong, V.,H., Powell, T., Power, A., Power, B., Proehl, E., Puri, R., Radford, A., Rae, J., Ramesh, A., Cameron Raymond, Real, F., Rimbach, K., Ross, C., Rotsted, B., Roussez, H., Ryder, N., Saltarelli, M., Sanders, T., Santurkar, S., Sastry, G., Schmidt, H., Schnurr, D., Schulman, J., Selsam, D., Sheppard, K., Sherbakov, T., Shieh, J., Shoker, S., Shyam, P., Sidor, S., Sigler, E., Simens, M., Sitkin, J., Slama, K., Sohl, I., Sokolowsky, B., Song, Y., Staudacher, N., Such, F.P., Summers, N., Sutskever, I., Tang, J., Tezak, N., Thompson, M.B., Tillet, P., Tootoonchian, A., Tseng, E., Tuggle, P., Turley, N., Tworek, J., Cerón Uribe, F.J., Vallone, A., Vijayvergiya, A., Voss, C., Wainwright, C., Wang, J.J., Wang, A., Wang, B., Ward, J., Wei, J., Weinmann, C.J., Welihinda, A., Welinder, P., Weng, J., Weng, L., Wiethoff, M., Willner, D., Winter, C., Wolrich, S., Wong, H., Workman, L., Wu, S., Wu, J., Wu, M., Xiao, K., Xu, T., Yoo, S., Yu, K., Yuan, Q., Zaremba, W., Zellers, R., Zhang, C., Zhang, M., Zhao, S., Zheng, T., Zhuang, J., Zhuk, W., & Zoph, B. (2024). GPT-4 Technical Report. arXiv. http://arxiv.org/abs/2303.08774
Pask, G. (1975). Minds and Media in Education and Entertainment: Some Theoretical Comments Illustrated by the Design and Operation of a System for Exteriorizing and Manipulating Individual Theses. In R. Trappl & G. Pask (Eds.), Progress in Cybernetics and System Research (pp. 38–50). London: Hemisphere.
Peeters, S. (2023). Zeeschuimer (Version 1.4). [Firefox plugin]. https://github.com/digitalmethodsinitiative/zeeschuimer
Perez, J., Castro, M., & Lopez, G. (2023). Serious Games and AI: Challenges and Opportunities for Computational Social Science. IEEE Access, 11, 62051–62061. https://doi.org/10.1109/ACCESS.2023.3286695
Peeters, S., & Hagen, S. (2022). The 4CAT Capture and Analysis Toolkit: A Modular Tool for Transparent and Traceable Social Media Research. Computational Communication Research, 4(2), 571–589. https://computationalcommunication.org/ccr/article/view/120
Popescu, A., & Schut, A. (2023). Generative AI in Creative Design Processes: A Dive into Possible Cognitive Biases. In D. De Sainz Molestina, L. Galluzzo, F. Rizzo & D. Spallazzo (Eds.), IASDR 2023: Life-Changing Design (pp. 1–10). https://doi.org/10.21606/iasdr.2023.784
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv. http://arxiv.org/abs/2204.06125
Rieder, B. (2020). Engines of Order: a Mechanology of Algorithmic Techniques. Amsterdam: Amsterdam University Press.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P. & Ommer, B. (2021). High-Resolution Image Synthesis with Latent Diffusion Models. arXiv. https://doi.org/10.1109/CVPR52688.2022.01042
Rogers, R. (2013). Digital Methods. Cambridge, MA: MIT Press.
Rogers, R. & Lewthwaite, S. (2019). Teaching Digital Methods: Interview with Richard Rogers. Revista Diseña, 14, 12–37. https://doi.org/10.7764/disena.14.12-37
Rogers, S., & Cairo, A. (2022). TwoTone. https://twotone.io/
Rozado, D. (2023). The Political Biases of ChatGPT. Social Sciences, 12(3), 148. https://doi.org/10.3390/socsci12030148
Russell Group. (2023). New Principles on Use of AI in Education. The Russell Group, 4 June. https://russellgroup.ac.uk/news/new-principles-on-use-of-ai-in-education/
Salvaggio, E. (2022). How to Read an AI Image. Cybernetic Forests, 2 October. https://www.cyberneticforests.com/news/how-to-read-an-ai-image
Shameem, S. (2022). Lexica AI. https://lexica.art/
Sharma, S. (2024). Understanding Digital Racism: Networks, Algorithms, Scale. Lanham, MD: Rowman & Littlefield.
Shrestha, Y.R., von Krogh, G., & Feuerriegel, S. (2023). Building Open-Source AI. Nature Computational Science, 3, 908–911. http://dx.doi.org/10.2139/ssrn.4614280
Silva, T. (2023). Mapeamento de Danos e Discriminação Algorítmica. Desvelar. https://desvelar.org/casos-de-discriminacao-algoritmica/
Sinclair, D., Dowdeswell, T., & Goltz, N. (2023). Artificially Intelligent Sex Bots and Female Slavery: Social Science and Jewish Legal and Ethical Perspectives. Information & Communications Technology Law, 32(3), 328–355. https://doi.org/10.1080/13600834.2022.2154050
Sinclair, S., & Rockwell, G. (2003). Voyant Tools (v2.6.13). [Software]. https://voyant-tools.org/
Souza, R. [@renatasouzario]. (2023). Racismo nas plataformas de inteligência artificial! [Video]. Instagram, 25 October. https://www.instagram.com/reel/Cy1p6EQpwXB/?igshid=MzRlODBiNWFlZA%3D%3D
Stokel-Walker, C., & Van Noorden, R. (2023). What ChatGPT and Generative AI Mean for Science. Nature, 614(7947), 214–216. https://doi.org/10.1038/d41586-023-00340-6
Sun, L., Wei, M., Sun, Y., Suh, Y. J., Shen, L., & Yang, S. (2023). Smiling Women Pitching Down: Auditing Representational and Presentational Gender Biases in Image Generative AI. arXiv. https://doi.org/10.1093/jcmc/zmad045
The DigiKam Team. (2001). digiKam (v8.3.0). [Software]. https://www.digikam.org/
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., Bikel, D., Blecher, L., Canton Ferrer, C., Chen, M., Cucurull, G., Esiobu, D., Fernandes, J., Fu, J., Fu, W., Fuller, B., Gao, C., Goswami, V., Goyal, N., Hartshorn, A., Hosseini, S., Hou, R., Inan, H., Kardas, M., Kerkez, V., Khabsa, M., Kloumann, I., Korenev, A., Singh Koura, P., Lachaux, M.-A., Lavril, T., Lee, J., Liskovich, D., Lu, Y., Mao, Y., Martinet, X., Mihaylov, T., Mishra, P., Molybog, M., Nie, Y., Poulton, A., Reizenstein, J., Rungta, R., Saladi, K., Schelten, A., Silva, R., Smith, E.M., Subramanian, R., Tan, X.E., Tang, B., Taylor, R., Williams, A., Kuan, J.X., Xu, P., Yan, Z., Zarov, I., Zhang, Y., Fan, A., Kambadur, M., Narang, S., Rodriguez, A., Stojnic, R., Edunov, S., & Scialom, T. (2023). Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv. http://arxiv.org/abs/2307.09288
Valenzuela, C., Matamala, A., & Germanidis, A. (2018). RunwayML. https://runwayml.com/
Visual Computing Group. (2018). Image Sorter (v4). [Software]. https://visual-computing.com/project/imagesorter/
Visual Crossing Corporation. (2003). Visual Crossing. https://www.visualcrossing.com/
Vogel, K.M. (2021). Big Data, AI, Platforms, and the Future of the U.S. Intelligence Workforce: A Research Agenda. IEEE Technology and Society Magazine, 40(3), 84–92. https://doi.org/10.1109/MTS.2021.3104384
Wang, F.-Y., Ding, W., Wang, X., Garibaldi, J., Teng, S., Imre, R., & Olaverri-Monreal, C. (2022). The DAO to DeSci: AI for Free, Fair, and Responsibility Sensitive Sciences. IEEE Intelligent Systems, 37(2), 16–22. https://doi.org/10.1109/MIS.2022.3167070
Wojcicki, A. (2020). Artflow AI. https://app.artflow.ai
Yu, C., Tschanz-Egger, J.L., & Souto, M. (2023). Tomato Girl Summer. Designing With: A New Educational Module to Integrate Artificial Intelligence, Machine Learning and Data Visualization in Design Curricula, 19 June. https://master-interaction-design.notion.site/Tomato-Girl-Summer-07dcf86e607e44d5b00b5d8cd9524a75
Zajko, M. (2021). Conservative AI and Social Inequality: Conceptualizing Alternatives to Bias through Social Theory. AI and Society, 36(3), 1047–1056. https://doi.org/10.1007/s00146-021-01153-9
The map is available at https://genmap.designingwithai.ch/map and documented at https://github.com/zumatt/AI-Methodology-Map. The AI Methodology Map is part of an experimental and multidisciplinary ongoing project, “Designing With: A New Educational Module to Integrate Artificial Intelligence, Machine Learning and Data Visualization in Design Curricula”, a research collaboration between the Institute of Design, SUPSI; the Universidade NOVA de Lisboa, iNOVA Media Lab; and the EPFL.
The term pedagogical refers to the theoretical-practical framework based on Bloom’s Taxonomy (Anderson & Krathwohl, 2001), reflecting the educational approaches, practices and purposes that should characterise education in the 21st century.
That is, a piece of text or input provided to a GenAI model which directs and shapes the model’s response.
As Borra (2024) explains, “foundation models are (pre-)trained on massive data sets — and are mainly probabilistic completion machines. Fine-tuned models use foundation models as their basis, but have learned to do specific tasks such as classification, extraction and summarisation.”
The student sample was defined according to the authors’ institutional affiliations and teaching agenda.
This workshop was developed in the context of the research project “Designing With: A New Educational Module to Integrate Artificial Intelligence, Machine Learning and Data Visualization in Design Curricula” (Botta et al., 2024). It supported testing and validation of the “Designing With Interactive Framework”, accessible at https://designingwithai.ch/interactive-framework.
The “Designing With: AI, ML, DV” workshop included six students of the SUPSI Master of Arts in Interaction Design, four students of the NOVA Master in New Media and Web Practices, two students of the NOVA Master of Science in Transition, Innovation, and Sustainability Environments, two students of the HEAD Master in Space and Communication, and three students of the EPFL Master in Architecture. The workshop was part of a broader research project, funded by Movetia in 2021, entitled “Designing With: A New Educational Module to Integrate Artificial Intelligence, Machine Learning and Data Visualization in Design Curricula”, in collaboration between the SUPSI Institute of Design, the Universidade NOVA de Lisboa, and the EPFL (École polytechnique fédérale de Lausanne) Media x Design Lab. The website of the full project is accessible via https://designingwithai.ch/.
The AI Methodology Map, conceptualized before the workshop and as the inspiration for its modules, has since been further expanded with a specific focus on generative AI web applications for digital methods-led research.
Two broad options were suggested: a social research project mapping social, political, cultural, or environmental issues, or a medium research-oriented project interrogating generative methods via their outputs.
Regarding reproducibility, the network vision methodology was developed by Janna Joceli Omena and her collaborators (see Omena et al., 2021) and is currently under formalization. The step-by-step process can be easily repeated by anyone, including those not familiar with digital methods, by following this document: https://docs.google.com/document/d/e/2PACX-1vR8IZJKni6j1tG8KE872LS8HsqBVe-PKSIlqVG5mMAfR7vUKTzmW_T9TPSe7mA-GVwr0LwMS5I96dbq/pub. Further discussion of these methods is available in Omena, 2021b.
https://www.instagram.com/reel/Cy1p6EQpwXB/?igshid=MzRlODBiNWFlZA
Additionally, the automation service market has, for instance, made researchers pay for services to track, capture, and study social and political bots or to analyse the impact of social media ads.
See also the work of Nicoletti & Bass (2023), Sun et al. (2023), and Popescu & Schut (2023) on how GenAI reproduces stereotypes and gender and cognitive biases.