<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">steps</journal-id><journal-title-group><journal-title xml:lang="ru">Шаги/Steps</journal-title><trans-title-group xml:lang="en"><trans-title>Shagi / Steps</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">2412-9410</issn><issn pub-type="epub">2782-1765</issn><publisher><publisher-name>The Russian Presidential Academy of National Economy and Public Administration</publisher-name></publisher></journal-meta><article-meta><article-id custom-type="edn" pub-id-type="custom">RIDITQ</article-id><article-id custom-type="elpub" pub-id-type="custom">steps-1111</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>КОНСТРУИРОВАНИЕ «ДРУГОГО» В МЕДИА</subject></subj-group></article-categories><title-group><article-title>Towards an ethical strategy for research data infrastructures: Digitalizing archives of historical hate</article-title><trans-title-group xml:lang="en"><trans-title>Towards an ethical strategy for research data infrastructures: Digitalizing archives of historical hate</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-5192-8007</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Krasni</surname><given-names>J. Z.</given-names></name><name name-style="western" xml:lang="en"><surname>Krasni</surname><given-names>J. Z.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Ян Златкович Красни, PhD научный сотрудник, Департамент образования</p><p>Bunsenstr. 
3, 35032, Marburg</p></bio><bio xml:lang="en"><p>Jan Zlatković Krasni, PhD Research Fellow, Department of Education</p><p>Bunsenstr. 3, 35032, Marburg</p></bio><email xlink:type="simple">jan.krasni@uni-marburg.de</email><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">Марбургский университет<country>Германия</country></aff><aff xml:lang="en">University of Marburg<country>Germany</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2025</year></pub-date><pub-date pub-type="epub"><day>21</day><month>12</month><year>2025</year></pub-date><volume>11</volume><issue>4</issue><fpage>205</fpage><lpage>221</lpage><permissions><copyright-statement>Copyright &#x00A9; Krasni J.Z., 2025</copyright-statement><copyright-year>2025</copyright-year><copyright-holder xml:lang="ru">Krasni J.Z.</copyright-holder><copyright-holder xml:lang="en">Krasni J.Z.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://steps.ranepa.ru/jour/article/view/1111">https://steps.ranepa.ru/jour/article/view/1111</self-uri><abstract><p>В статье рассматриваются этические сложности, связанные с разработкой инфраструктур исследовательских данных (Research Data Infrastructures, RDI) для оцифрованных архивов с акцентом на материалы, содержащие исторический контент, касающийся вражды. Рассматривается напряжение, с одной стороны, между принципами открытого доступа и массовой оцифровки, которые направлены на повышение доступности знаний, и, с другой стороны, этическим императивом предотвратить распространение вредного контента, который может увековечить предвзятые идеологии или вредные стереотипы. 
Для решения этих проблем автор предлагает комплексную этическую стратегию, которая объединяет подход трансэпистемического осмысления с принципами организационного обучения (в понимании Organisationspädagogik). В этой стратегии особое внимание уделяется сотрудничеству между различными заинтересованными сторонами – архивистами, исследователями, экспертами в области информационных технологий и заинтересованными сообществами – для создания как этически надежных, так и практически жизнеспособных решений. Выходя за пределы исключительно технических или правовых рамок, предложенный подход стремится сбалансировать доступность исторических документов для исследовательских целей и необходимость снижения рисков, связанных с распространением ненавистнического контента. В статье рассматриваются такие важные вопросы, как алгоритмическая предвзятость, которая может непреднамеренно усилить вредные стереотипы, и потенциал двойного назначения технологий искусственного интеллекта, когда технологии, созданные для повышения эффективности архивного дела, могут быть использованы не по назначению. Автор также обсуждает противоречие между принципами открытой науки и ограниченным доступом к чувствительным материалам, выступая за тонко настроенные модели контроля доступа как для человеческих пользователей, так и для систем искусственного интеллекта. Благодаря трансэпистемическому подходу стратегия поддерживает междисциплинарный диалог таким образом, чтобы инфраструктуры исследовательских данных были этичными, сохраняли исторические знания и защищали их от вреда, способствуя формированию стандартов ответственного цифрового архивирования.</p></abstract><trans-abstract xml:lang="en"><p>This paper explores the ethical complexities of developing research data infrastructures (RDIs) for digitalized archives, with a focus on materials containing historical hateful content. 
It examines the tension between the principles of open access and mass digitization, which aim to enhance knowledge accessibility, and the ethical imperative to prevent the dissemination of harmful content that could perpetuate biased ideologies or harmful stereotypes. The author proposes a comprehensive ethical strategy that integrates a trans-epistemic design approach with principles of organizational learning and development to address these challenges. This strategy emphasizes collaboration among diverse stakeholders – archivists, researchers, IT experts, and affected communities – to create solutions that are both ethically robust and practically viable. By moving beyond solely technical or legal frameworks, the approach seeks to balance the accessibility of historical records for research purposes with the need to mitigate risks associated with the spread of hateful content. The paper delves into critical issues such as algorithmic bias, which can inadvertently amplify harmful stereotypes, and the dual-use potential of AI, where technologies designed for archival efficiency might be misused. It also addresses the conflict between open science principles and restricted access to sensitive materials, advocating for nuanced access controls for both human users and AI systems. 
Through a trans-epistemic lens, the strategy fosters interdisciplinary dialogue to ensure RDIs serve as ethical infrastructures that preserve historical knowledge while safeguarding against harm, contributing to a framework for responsible digital archiving.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>этическая стратегия</kwd><kwd>инфраструктуры исследовательских данных</kwd><kwd>цифровые архивы</kwd><kwd>ненавистнический контент (контент вражды)</kwd><kwd>алгоритмическая предвзятость</kwd><kwd>трансэпистемический подход к проектированию</kwd><kwd>открытый доступ</kwd><kwd>этика искусственного интеллекта</kwd></kwd-group><kwd-group xml:lang="en"><kwd>ethical strategy</kwd><kwd>Research Data Infrastructures (RDI)</kwd><kwd>digital archives</kwd><kwd>hateful content</kwd><kwd>algorithmic bias</kwd><kwd>trans-epistemic design approach</kwd><kwd>open access</kwd><kwd>AI ethics</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Abramson, C. M., Li, Z., Prendergast, T., &amp; Sánchez-Jankowski, M. (2024). Inequality in the origins and experiences of pain: What “big (qualitative) data” reveal about social suffering in the United States. RSF: The Russell Sage Foundation Journal of the Social Sciences, 10(5), 34–65. https://doi.org/10.7758/RSF.2024.10.5.02.</mixed-citation><mixed-citation xml:lang="en">Abramson, C. M., Li, Z., Prendergast, T., &amp; Sánchez-Jankowski, M. (2024). Inequality in the origins and experiences of pain: What “big (qualitative) data” reveal about social suffering in the United States. RSF: The Russell Sage Foundation Journal of the Social Sciences, 10(5), 34–65. https://doi.org/10.7758/RSF.2024.10.5.02.</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">Agamben, G. (2009). What is an apparatus? and other essays. Stanford Univ. Press. 
https://doi.org/10.1515/9781503600041.</mixed-citation><mixed-citation xml:lang="en">Agamben, G. (2009). What is an apparatus? and other essays. Stanford Univ. Press. https://doi.org/10.1515/9781503600041.</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Ascone, L., Becker, M. J., Bolton, M., Chapelan, A., Haupeltshofer, P., Krasni, J., Krugel, A., Mihaljević, H., Placzynta, K., Pustet, M., Scheiber, M., Steffen, E., Troschke, H., Tschiskale, V., &amp; Vincent, Ch. (2023). Decoding antisemitism: An AI-driven study on hate speech and imagery online. Technische Universität Berlin. Centre for Research on Antisemitism. https://doi.org/10.14279/depositonce-17105.</mixed-citation><mixed-citation xml:lang="en">Ascone, L., Becker, M. J., Bolton, M., Chapelan, A., Haupeltshofer, P., Krasni, J., Krugel, A., Mihaljević, H., Placzynta, K., Pustet, M., Scheiber, M., Steffen, E., Troschke, H., Tschiskale, V., &amp; Vincent, Ch. (2023). Decoding antisemitism: An AI-driven study on hate speech and imagery online. Technische Universität Berlin. Centre for Research on Antisemitism. https://doi.org/10.14279/depositonce-17105.</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Chilcott, A. (2019). Towards protocols for describing racially offensive language in UK public archives. Archival Science, 19, 359–376. https://doi.org/10.1007/s10502-019-09314-y. Colavizza, G., Blanke, T., Jeurgens, C., &amp; Noordegraaf, J. (2021). Archives and AI: An overview of current debates and future perspectives. Journal on Computing and Cultural Heritage, 15(1), Article 4. https://doi.org/10.1145/3479010.</mixed-citation><mixed-citation xml:lang="en">Chilcott, A. (2019). Towards protocols for describing racially offensive language in UK public archives. Archival Science, 19, 359–376. https://doi.org/10.1007/s10502-019-09314-y. 
Colavizza, G., Blanke, T., Jeurgens, C., &amp; Noordegraaf, J. (2021). Archives and AI: An overview of current debates and future perspectives. Journal on Computing and Cultural Heritage, 15(1), Article 4. https://doi.org/10.1145/3479010.</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Cushing, A. L., &amp; Osti, G. (2023). “So how do we balance all of these needs?”: How the concept of AI technology impacts digital archival expertise. Journal of Documentation, 79(7), 12–29. https://doi.org/10.1108/JD-08-2022-0170.</mixed-citation><mixed-citation xml:lang="en">Cushing, A. L., &amp; Osti, G. (2023). “So how do we balance all of these needs?”: How the concept of AI technology impacts digital archival expertise. Journal of Documentation, 79(7), 12–29. https://doi.org/10.1108/JD-08-2022-0170.</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">Dieterle, E., Dede, C., &amp; Walker, M. (2024). The cyclical ethical effects of using artificial intelligence in education. AI &amp; Society, 39, 633–643. https://doi.org/10.1007/s00146-022-01497-w.</mixed-citation><mixed-citation xml:lang="en">Dieterle, E., Dede, C., &amp; Walker, M. (2024). The cyclical ethical effects of using artificial intelligence in education. AI &amp; Society, 39, 633–643. https://doi.org/10.1007/s00146-022-01497-w.</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Dogtas, G., Ibitz, M.-P., Jonitz, F., Kocher, V., Poyer, A., &amp; Stapf, L. (2022). Kritik an rassifizierenden und diskriminierenden Titeln und Metadaten – Praxisorientierte Lösungsansätze. Critical Library Perspectives, 9(4), 1–14. https://doi.org/10.21428/1bfadeb6.abe15b5e.</mixed-citation><mixed-citation xml:lang="en">Dogtas, G., Ibitz, M.-P., Jonitz, F., Kocher, V., Poyer, A., &amp; Stapf, L. (2022). 
Kritik an rassifizierenden und diskriminierenden Titeln und Metadaten – Praxisorientierte Lösungsansätze. Critical Library Perspectives, 9(4), 1–14. https://doi.org/10.21428/1bfadeb6.abe15b5e.</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">Ebert, K., Bode, M., Haase, T., &amp; Keller, A. (2022). Mobile digitale Assistenzsysteme in der Weberei — Anforderungen an die kognitiv ergonomische Gestaltung. In Technologie und Bildung in hybriden Arbeitswelten. 68. GfA-Frühjahrskongress 2022 (pp. 1–6) (n. p.).</mixed-citation><mixed-citation xml:lang="en">Ebert, K., Bode, M., Haase, T., &amp; Keller, A. (2022). Mobile digitale Assistenzsysteme in der Weberei — Anforderungen an die kognitiv ergonomische Gestaltung. In Technologie und Bildung in hybriden Arbeitswelten. 68. GfA-Frühjahrskongress 2022 (pp. 1–6) (n. p.).</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">Favaretto, M., Clercq, E., Briel, M., &amp; Elger, B. (2020). Working through ethics review of big data research projects: an investigation into the experiences of Swiss and American researchers. Journal of Empirical Research on Human Research Ethics, 15(4), 339–354. https://doi.org/10.1177/1556264620935223.</mixed-citation><mixed-citation xml:lang="en">Favaretto, M., Clercq, E., Briel, M., &amp; Elger, B. (2020). Working through ethics review of big data research projects: an investigation into the experiences of Swiss and American researchers. Journal of Empirical Research on Human Research Ethics, 15(4), 339–354. https://doi.org/10.1177/1556264620935223.</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies (preprint). 
https://doi.org/10.2196/preprints.48399.</mixed-citation><mixed-citation xml:lang="en">Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies (preprint). https://doi.org/10.2196/preprints.48399.</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">Finn, M., &amp; Shilton, K. (2023). Ethics governance development: the case of the Menlo Report. Social Studies of Science, 53(3), 315–340. https://doi.org/10.1177/03063127231151708.</mixed-citation><mixed-citation xml:lang="en">Finn, M., &amp; Shilton, K. (2023). Ethics governance development: the case of the Menlo Report. Social Studies of Science, 53(3), 315–340. https://doi.org/10.1177/03063127231151708.</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">Floridi, L. (2011). Enveloping the world for AI. The Philosophers’ Magazine, 54(54), 20–21. https://doi.org/10.5840/tpm20115437.</mixed-citation><mixed-citation xml:lang="en">Floridi, L. (2011). Enveloping the world for AI. The Philosophers’ Magazine, 54(54), 20–21. https://doi.org/10.5840/tpm20115437.</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">Floridi, L. (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities. Oxford Univ. Press.</mixed-citation><mixed-citation xml:lang="en">Floridi, L. (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities. Oxford Univ. Press.</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">Floridi, L., &amp; Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. 
https://doi.org/10.1098/rsta.2016.0360.</mixed-citation><mixed-citation xml:lang="en">Floridi, L., &amp; Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https://doi.org/10.1098/rsta.2016.0360.</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">Ghasemaghaei, M., &amp; Kordzadeh, N. (2024). Understanding how algorithmic injustice leads to making discriminatory decisions: An obedience to authority perspective. Information &amp; Management, 61(2), 1–14. https://doi.org/10.1016/j.im.2024.103921.</mixed-citation><mixed-citation xml:lang="en">Ghasemaghaei, M., &amp; Kordzadeh, N. (2024). Understanding how algorithmic injustice leads to making discriminatory decisions: An obedience to authority perspective. Information &amp; Management, 61(2), 1–14. https://doi.org/10.1016/j.im.2024.103921.</mixed-citation></citation-alternatives></ref><ref id="cit16"><label>16</label><citation-alternatives><mixed-citation xml:lang="ru">Gualdi, F., &amp; Cordella, A. (2021). Artificial intelligence and decision-making: The question of accountability. In Proceedings of the 54th Hawaii International Conference on System Sciences (pp. 2297–2306) (n. p.). https://doi.org/10.24251/hicss.2021.281.</mixed-citation><mixed-citation xml:lang="en">Gualdi, F., &amp; Cordella, A. (2021). Artificial intelligence and decision-making: The question of accountability. In Proceedings of the 54th Hawaii International Conference on System Sciences (pp. 2297–2306) (n. p.). https://doi.org/10.24251/hicss.2021.281.</mixed-citation></citation-alternatives></ref><ref id="cit17"><label>17</label><citation-alternatives><mixed-citation xml:lang="ru">Haase, T., Keller, A., Radde, J., Berndt, D., Fredrich, H., &amp; Dick, M. (2020). Anforderungen an die lerntheoretische Gestaltung arbeitsplatzintegrierter VR-/AR-Anwendungen. 
In GfA Dortmund (Ed.). Digitaler Wandel, digitale Arbeit, digitaler Mensch? B.16.1., 1–7.</mixed-citation><mixed-citation xml:lang="en">Haase, T., Keller, A., Radde, J., Berndt, D., Fredrich, H., &amp; Dick, M. (2020). Anforderungen an die lerntheoretische Gestaltung arbeitsplatzintegrierter VR-/AR-Anwendungen. In GfA Dortmund (Ed.). Digitaler Wandel, digitale Arbeit, digitaler Mensch? B.16.1., 1–7.</mixed-citation></citation-alternatives></ref><ref id="cit18"><label>18</label><citation-alternatives><mixed-citation xml:lang="ru">Heidelmann, M. A., &amp; Weber, S. M. (2022). Eine Haltung ausbilden — Organisationen und Netzwerke beraten lernen. Mit symbolischen Ordnungen der Beratung zur Organisationspädagogischen Professionalisierung. In J. Elven, &amp; S. M. Weber (Eds.). Beratung in symbolischen Ordnungen. Organisationspädagogische Analysen sozialer Beratungspraxis (pp. 325–356). Springer. https://doi.org/10.1007/978-3-658-13090-9_17.</mixed-citation><mixed-citation xml:lang="en">Heidelmann, M. A., &amp; Weber, S. M. (2022). Eine Haltung ausbilden — Organisationen und Netzwerke beraten lernen. Mit symbolischen Ordnungen der Beratung zur Organisationspädagogischen Professionalisierung. In J. Elven, &amp; S. M. Weber (Eds.). Beratung in symbolischen Ordnungen. Organisationspädagogische Analysen sozialer Beratungspraxis (pp. 325–356). Springer. https://doi.org/10.1007/978-3-658-13090-9_17.</mixed-citation></citation-alternatives></ref><ref id="cit19"><label>19</label><citation-alternatives><mixed-citation xml:lang="ru">Hemphill, S., Jackson, K., Bradley, S., &amp; Bhartia, B. (2023). The implementation of artificial intelligence in radiology: a narrative review of patient perspectives. Future Healthcare Journal, 10(1), 63–68. https://doi.org/10.7861/fhj.2022-0097.</mixed-citation><mixed-citation xml:lang="en">Hemphill, S., Jackson, K., Bradley, S., &amp; Bhartia, B. (2023). The implementation of artificial intelligence in radiology: a narrative review of patient perspectives. 
Future Healthcare Journal, 10(1), 63–68. https://doi.org/10.7861/fhj.2022-0097.</mixed-citation></citation-alternatives></ref><ref id="cit20"><label>20</label><citation-alternatives><mixed-citation xml:lang="ru">Holterhoff, K. (2017). From disclaimer to critique: Race and the digital image archivist. Digital Humanities Quarterly, 11(3). http://www.digitalhumanities.org/dhq/vol/11/3/000324/000324.html.</mixed-citation><mixed-citation xml:lang="en">Holterhoff, K. (2017). From disclaimer to critique: Race and the digital image archivist. Digital Humanities Quarterly, 11(3). http://www.digitalhumanities.org/dhq/vol/11/3/000324/000324.html.</mixed-citation></citation-alternatives></ref><ref id="cit21"><label>21</label><citation-alternatives><mixed-citation xml:lang="ru">Jensen, U., &amp; Schüler-Springorum, S. (2013). Einführung: Gefühle gegen Juden. Die Emotionsgeschichte des modernen Antisemitismus. Geschichte und Gesellschaft, 39(4), 413–442. https://doi.org/10.13109/gege.2013.39.4.413.</mixed-citation><mixed-citation xml:lang="en">Jensen, U., &amp; Schüler-Springorum, S. (2013). Einführung: Gefühle gegen Juden. Die Emotionsgeschichte des modernen Antisemitismus. Geschichte und Gesellschaft, 39(4), 413–442. https://doi.org/10.13109/gege.2013.39.4.413.</mixed-citation></citation-alternatives></ref><ref id="cit22"><label>22</label><citation-alternatives><mixed-citation xml:lang="ru">Jobin, A., &amp; Ienca, M. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2.</mixed-citation><mixed-citation xml:lang="en">Jobin, A., &amp; Ienca, M. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2.</mixed-citation></citation-alternatives></ref><ref id="cit23"><label>23</label><citation-alternatives><mixed-citation xml:lang="ru">Jones, C., Castro, D. C., De Sousa Ribeiro, F., et al. (2024). 
A causal perspective on dataset bias in machine learning for medical imaging. Nature Machine Intelligence, 6, 138–146. https://doi.org/10.1038/s42256-024-00797-8.</mixed-citation><mixed-citation xml:lang="en">Jones, C., Castro, D. C., De Sousa Ribeiro, F., et al. (2024). A causal perspective on dataset bias in machine learning for medical imaging. Nature Machine Intelligence, 6, 138–146. https://doi.org/10.1038/s42256-024-00797-8.</mixed-citation></citation-alternatives></ref><ref id="cit24"><label>24</label><citation-alternatives><mixed-citation xml:lang="ru">Keller, A., Selinski, A., Vuong, T. H. C., &amp; Haase, T. (2024). Stakeholderspezifische Zugänge zu arbeitsgestalterischen Inhalten — technisch-didaktische Konzeption und erste Erkenntnisse. Arbeitswissenschaft in-the-loop. Mensch-Technologie-Integration und ihre Auswirkung auf Mensch, Arbeit und Arbeitsgestaltung, I.1.4, 1–6. https://doi.org/10.24406/publica-3888.</mixed-citation><mixed-citation xml:lang="en">Keller, A., Selinski, A., Vuong, T. H. C., &amp; Haase, T. (2024). Stakeholderspezifische Zugänge zu arbeitsgestalterischen Inhalten — technisch-didaktische Konzeption und erste Erkenntnisse. Arbeitswissenschaft in-the-loop. Mensch-Technologie-Integration und ihre Auswirkung auf Mensch, Arbeit und Arbeitsgestaltung, I.1.4, 1–6. https://doi.org/10.24406/publica-3888.</mixed-citation></citation-alternatives></ref><ref id="cit25"><label>25</label><citation-alternatives><mixed-citation xml:lang="ru">Keller, A., &amp; Weber, S. M. (2020). Trans-epistemic design-(research): Theorizing design within industry 4.0 and cognitive assistive systems. In Proceedings of the Design Society: DESIGN Conference (Vol. 1, pp. 627–636). Cambridge Univ. Press. https://doi.org/10.1017/dsd.2020.173.</mixed-citation><mixed-citation xml:lang="en">Keller, A., &amp; Weber, S. M. (2020). Trans-epistemic design-(research): Theorizing design within industry 4.0 and cognitive assistive systems. 
In Proceedings of the Design Society: DESIGN Conference (Vol. 1, pp. 627–636). Cambridge Univ. Press. https://doi.org/10.1017/dsd.2020.173.</mixed-citation></citation-alternatives></ref><ref id="cit26"><label>26</label><citation-alternatives><mixed-citation xml:lang="ru">Keller, A., Weber, S. M., Rentzsch, M., &amp; Haase, T. (2021). Lern- und Assistenzsysteme partizipativ integrieren – Entwicklung einer Systematik zur Prozessgestaltung auf Basis eines organisationspädagogischen Ansatzes. Zeitschrift für Arbeitswissenschaft, 75, 455–469. https://doi.org/10.1007/s41449-021-00279-2.</mixed-citation><mixed-citation xml:lang="en">Keller, A., Weber, S. M., Rentzsch, M., &amp; Haase, T. (2021). Lern- und Assistenzsysteme partizipativ integrieren – Entwicklung einer Systematik zur Prozessgestaltung auf Basis eines organisationspädagogischen Ansatzes. Zeitschrift für Arbeitswissenschaft, 75, 455–469. https://doi.org/10.1007/s41449-021-00279-2.</mixed-citation></citation-alternatives></ref><ref id="cit27"><label>27</label><citation-alternatives><mixed-citation xml:lang="ru">Kordzadeh, N., &amp; Ghasemaghaei, M. (2022). Algorithmic bias: review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388–409. https://doi.org/10.1080/0960085X.2021.1927212.</mixed-citation><mixed-citation xml:lang="en">Kordzadeh, N., &amp; Ghasemaghaei, M. (2022). Algorithmic bias: review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388–409. https://doi.org/10.1080/0960085X.2021.1927212.</mixed-citation></citation-alternatives></ref><ref id="cit28"><label>28</label><citation-alternatives><mixed-citation xml:lang="ru">Kroll, J. A. (2020). Accountability in computer systems. In M. D. Dubber, F. Pasquale, &amp; S. Das (Eds.). The Oxford handbook of ethics of AI (pp. 180–196). Oxford Univ. Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.10.</mixed-citation><mixed-citation xml:lang="en">Kroll, J. A. (2020). 
Accountability in computer systems. In M. D. Dubber, F. Pasquale, &amp; S. Das (Eds.). The Oxford handbook of ethics of AI (pp. 180–196). Oxford Univ. Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.10.</mixed-citation></citation-alternatives></ref><ref id="cit29"><label>29</label><citation-alternatives><mixed-citation xml:lang="ru">Manžuch, Z. (2017). Ethical issues in digitization of cultural heritage. Journal of Contemporary Archival Studies, 4, Article 4, 1–17.</mixed-citation><mixed-citation xml:lang="en">Manžuch, Z. (2017). Ethical issues in digitization of cultural heritage. Journal of Contemporary Archival Studies, 4, Article 4, 1–17.</mixed-citation></citation-alternatives></ref><ref id="cit30"><label>30</label><citation-alternatives><mixed-citation xml:lang="ru">Marchello, G., Giovanelli, R., Fontana, E., Cannella, F., &amp; Traviglia, A. (2023). Cultural heritage digital preservation through AI-driven robotics. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Vol. XLVIIIM-2-2023, pp. 995–1000) (n. p.). https://doi.org/10.5194/isprs-archives-XLVIIIM-2-2023-995-2023.</mixed-citation><mixed-citation xml:lang="en">Marchello, G., Giovanelli, R., Fontana, E., Cannella, F., &amp; Traviglia, A. (2023). Cultural heritage digital preservation through AI-driven robotics. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Vol. XLVIIIM-2-2023, pp. 995–1000) (n. p.). https://doi.org/10.5194/isprs-archives-XLVIIIM-2-2023-995-2023.</mixed-citation></citation-alternatives></ref><ref id="cit31"><label>31</label><citation-alternatives><mixed-citation xml:lang="ru">Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., &amp; Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. Article 115. 
https://doi.org/10.1145/3457607.</mixed-citation><mixed-citation xml:lang="en">Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., &amp; Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. Article 115. https://doi.org/10.1145/3457607.</mixed-citation></citation-alternatives></ref><ref id="cit32"><label>32</label><citation-alternatives><mixed-citation xml:lang="ru">Metcalf, J., &amp; Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data &amp; Society, 3(1), 1–14. https://doi.org/10.1177/2053951716650211.</mixed-citation><mixed-citation xml:lang="en">Metcalf, J., &amp; Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data &amp; Society, 3(1), 1–14. https://doi.org/10.1177/2053951716650211.</mixed-citation></citation-alternatives></ref><ref id="cit33"><label>33</label><citation-alternatives><mixed-citation xml:lang="ru">Modest, W. (2018). Words matter. In R. Lelijveld (Ed.). Words Matter. An unfinished guide to word choices in the cultural sector (E-book, pp. 13–17). Tropen Museum, Afrika Museum, Museum Volkenkunde, Wereld Museum. URL: https://www.materialculture.nl/en/publications/words-matter.</mixed-citation><mixed-citation xml:lang="en">Modest, W. (2018). Words matter. In R. Lelijveld (Ed.). Words Matter. An unfinished guide to word choices in the cultural sector (E-book, pp. 13–17). Tropen Museum, Afrika Museum, Museum Volkenkunde, Wereld Museum. URL: https://www.materialculture.nl/en/publications/words-matter.</mixed-citation></citation-alternatives></ref><ref id="cit34"><label>34</label><citation-alternatives><mixed-citation xml:lang="ru">Naß, M. A. (2020). Was darf die Kunst(institution)? Zwischen dem white cube als safe space und Zensur als Neurechter Kampfbegriff. In C. M. Ruederer (Ed.). Infrastructures — Online Reader. Kunstverein München e. V. 
URL: https://www.kunstverein-muenchen.de/de/programm/programmreihen/2020/infrastructures/online-reader.</mixed-citation><mixed-citation xml:lang="en">Naß, M. A. (2020). Was darf die Kunst(institution)? Zwischen dem white cube als safe space und Zensur als Neurechter Kampfbegriff. In C. M. Ruederer (Ed.). Infrastructures — Online Reader. Kunstverein München e. V. URL: https://www.kunstverein-muenchen.de/de/programm/programmreihen/2020/infrastructures/online-reader.</mixed-citation></citation-alternatives></ref><ref id="cit35"><label>35</label><citation-alternatives><mixed-citation xml:lang="ru">Naumann, K. (2023). Ethische Grundlagen der Onlinestellung von digitalisiertem Archivgut und deren Umsetzung. Recht und Zugang, 4(3), 237–252. https://doi.org/10.5771/2699-1284-2023-3.</mixed-citation><mixed-citation xml:lang="en">Naumann, K. (2023). Ethische Grundlagen der Onlinestellung von digitalisiertem Archivgut und deren Umsetzung. Recht und Zugang, 4(3), 237–252. https://doi.org/10.5771/2699-1284-2023-3.</mixed-citation></citation-alternatives></ref><ref id="cit36"><label>36</label><citation-alternatives><mixed-citation xml:lang="ru">Nickel, P. (2022). Trust in medical artificial intelligence: A discretionary account. Ethics and Information Technology, 24(1), Article 7. https://doi.org/10.1007/s10676-022-09630-5.</mixed-citation><mixed-citation xml:lang="en">Nickel, P. (2022). Trust in medical artificial intelligence: A discretionary account. Ethics and Information Technology, 24(1), Article 7. https://doi.org/10.1007/s10676-022-09630-5.</mixed-citation></citation-alternatives></ref><ref id="cit37"><label>37</label><citation-alternatives><mixed-citation xml:lang="ru">Novelli, C., Taddeo, M., &amp; Floridi, L. (2024). Accountability in artificial intelligence: what it is and how it works. AI &amp; Society, 39, 1871–1882. https://doi.org/10.1007/s00146-023-01635-y.</mixed-citation><mixed-citation xml:lang="en">Novelli, C., Taddeo, M., &amp; Floridi, L. (2024). 
Accountability in artificial intelligence: what it is and how it works. AI &amp; Society, 39, 1871–1882. https://doi.org/10.1007/s00146-023-01635-y.</mixed-citation></citation-alternatives></ref><ref id="cit38"><label>38</label><citation-alternatives><mixed-citation xml:lang="ru">Ochang, P., Stahl, B., &amp; Eke, D. (2022). The ethical and legal landscape of brain data governance. PLOS ONE, 17(12), e0273473. https://doi.org/10.1371/journal.pone.0273473.</mixed-citation><mixed-citation xml:lang="en">Ochang, P., Stahl, B., &amp; Eke, D. (2022). The ethical and legal landscape of brain data governance. PLOS ONE, 17(12), e0273473. https://doi.org/10.1371/journal.pone.0273473.</mixed-citation></citation-alternatives></ref><ref id="cit39"><label>39</label><citation-alternatives><mixed-citation xml:lang="ru">O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.</mixed-citation><mixed-citation xml:lang="en">O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.</mixed-citation></citation-alternatives></ref><ref id="cit40"><label>40</label><citation-alternatives><mixed-citation xml:lang="ru">Pilgrim, D. (n. d., retrieved 2005). The garbage man: Why I collect racist objects. In Jim Crow Museum. https://jimcrowmuseum.ferris.edu/collect.htm.</mixed-citation><mixed-citation xml:lang="en">Pilgrim, D. (n. d., retrieved 2005). The garbage man: Why I collect racist objects. In Jim Crow Museum. https://jimcrowmuseum.ferris.edu/collect.htm.</mixed-citation></citation-alternatives></ref><ref id="cit41"><label>41</label><citation-alternatives><mixed-citation xml:lang="ru">Poel, I. (2020). Embedding values in artificial intelligence (AI) systems. Minds and Machines, 30(3), 385–409. https://doi.org/10.1007/s11023-020-09537-4.</mixed-citation><mixed-citation xml:lang="en">Poel, I. (2020). Embedding values in artificial intelligence (AI) systems. 
Minds and Machines, 30(3), 385–409. https://doi.org/10.1007/s11023-020-09537-4.</mixed-citation></citation-alternatives></ref><ref id="cit42"><label>42</label><citation-alternatives><mixed-citation xml:lang="ru">Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., &amp; Barnes, P. (2020). Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In M. Hildebrandt, &amp; C. Castillo (Eds.). FAT*’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372873.</mixed-citation><mixed-citation xml:lang="en">Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., &amp; Barnes, P. (2020). Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In M. Hildebrandt, &amp; C. Castillo (Eds.). FAT*’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372873.</mixed-citation></citation-alternatives></ref><ref id="cit43"><label>43</label><citation-alternatives><mixed-citation xml:lang="ru">Randby, T., &amp; Marciano, R. (2020). Digital Curation and Machine Learning Experimentation in Archives. In Xintao Wu et al. (Eds.). 2020 IEEE International Conference on Big Data (Big Data) (pp. 1904–1913). IEEE. https://doi.org/10.1109/BigData50022.2020.9377788.</mixed-citation><mixed-citation xml:lang="en">Randby, T., &amp; Marciano, R. (2020). Digital Curation and Machine Learning Experimentation in Archives. In Xintao Wu et al. (Eds.). 2020 IEEE International Conference on Big Data (Big Data) (pp. 1904–1913). IEEE. 
https://doi.org/10.1109/BigData50022.2020.9377788.</mixed-citation></citation-alternatives></ref><ref id="cit44"><label>44</label><citation-alternatives><mixed-citation xml:lang="ru">Rantanen, M., Hyrynsalmi, S., &amp; Hyrynsalmi, S. (2019). Towards ethical data ecosystems: A literature study. In 2019 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC): Conference Proceedings ICE/IEEE ITMC 2019 #47383: Cocreating our Future: Scaling-up Innovation Capacities through the Design and Engineering of Immersive, Collaborative, Empathic and Cognitive Systems (pp. 1–9). IEEE. https://doi.org/10.1109/ice.2019.8792599.</mixed-citation><mixed-citation xml:lang="en">Rantanen, M., Hyrynsalmi, S., &amp; Hyrynsalmi, S. (2019). Towards ethical data ecosystems: A literature study. In 2019 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC): Conference Proceedings ICE/IEEE ITMC 2019 #47383: Cocreating our Future: Scaling-up Innovation Capacities through the Design and Engineering of Immersive, Collaborative, Empathic and Cognitive Systems (pp. 1–9). IEEE. https://doi.org/10.1109/ice.2019.8792599.</mixed-citation></citation-alternatives></ref><ref id="cit45"><label>45</label><citation-alternatives><mixed-citation xml:lang="ru">Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767. https://doi.org/10.1007/s11948-020-00228-y.</mixed-citation><mixed-citation xml:lang="en">Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767. https://doi.org/10.1007/s11948-020-00228-y.</mixed-citation></citation-alternatives></ref><ref id="cit46"><label>46</label><citation-alternatives><mixed-citation xml:lang="ru">Shabani, M., Thorogood, A., &amp; Murtagh, M. (2021). Data access governance. In G. Laurie et al. (Eds.). The Cambridge handbook of health research regulation (pp. 187–196). 
Cambridge Univ. Press. https://doi.org/10.1017/9781108620024.023.</mixed-citation><mixed-citation xml:lang="en">Shabani, M., Thorogood, A., &amp; Murtagh, M. (2021). Data access governance. In G. Laurie et al. (Eds.). The Cambridge handbook of health research regulation (pp. 187–196). Cambridge Univ. Press. https://doi.org/10.1017/9781108620024.023.</mixed-citation></citation-alternatives></ref><ref id="cit47"><label>47</label><citation-alternatives><mixed-citation xml:lang="ru">Strickert, M. (2021). Zwischen Normierung und Offenheit – Potenziale und offene Fragen bezüglich kontrollierter Vokabulare und Normdateien. LIBREAS. Library Ideas, 40, 1–19. https://doi.org/10.18452/23807.</mixed-citation><mixed-citation xml:lang="en">Strickert, M. (2021). Zwischen Normierung und Offenheit – Potenziale und offene Fragen bezüglich kontrollierter Vokabulare und Normdateien. LIBREAS. Library Ideas, 40, 1–19. https://doi.org/10.18452/23807.</mixed-citation></citation-alternatives></ref><ref id="cit48"><label>48</label><citation-alternatives><mixed-citation xml:lang="ru">Subías-Beltrán, P., Pitarch, C., Migliorelli, C., Marte, L., Galofré, M., &amp; Orte, S. (2024). The role of transparency in AI-driven technologies: Targeting healthcare. In E. P. Dados (Ed.). Artificial intelligence — Ethical and legal challenges (forthcoming; online first). 1–21. https://doi.org/10.5772/intechopen.1007444.</mixed-citation><mixed-citation xml:lang="en">Subías-Beltrán, P., Pitarch, C., Migliorelli, C., Marte, L., Galofré, M., &amp; Orte, S. (2024). The role of transparency in AI-driven technologies: Targeting healthcare. In E. P. Dados (Ed.). Artificial intelligence — Ethical and legal challenges (forthcoming; online first). 1–21. https://doi.org/10.5772/intechopen.1007444.</mixed-citation></citation-alternatives></ref><ref id="cit49"><label>49</label><citation-alternatives><mixed-citation xml:lang="ru">Whittlestone, J., Nyrup, R., Alexandrova, A., &amp; Cave, S. (2019). 
The role and limits of principles in AI ethics: Towards a focus on tensions. In V. Conitzer et al. (Eds.). AIES’19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 195–200). Association for Computing Machinery. https://doi.org/10.1145/3306618.3314289.</mixed-citation><mixed-citation xml:lang="en">Whittlestone, J., Nyrup, R., Alexandrova, A., &amp; Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. In V. Conitzer et al. (Eds.). AIES’19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 195–200). Association for Computing Machinery. https://doi.org/10.1145/3306618.3314289.</mixed-citation></citation-alternatives></ref><ref id="cit50"><label>50</label><citation-alternatives><mixed-citation xml:lang="ru">Yang, H. F., Zhao, Y., Cai, J., Zhu, M., Hwang, J.-N., &amp; Chen, Y. (2024). Mitigating bias of deep neural networks for trustworthy traffic perception in autonomous systems. In 2024 IEEE Intelligent Vehicles Symposium (IV) (pp. 633–638). IEEE. https://doi.org/10.1109/IV55156.2024.10588805.</mixed-citation><mixed-citation xml:lang="en">Yang, H. F., Zhao, Y., Cai, J., Zhu, M., Hwang, J.-N., &amp; Chen, Y. (2024). Mitigating bias of deep neural networks for trustworthy traffic perception in autonomous systems. In 2024 IEEE Intelligent Vehicles Symposium (IV) (pp. 633–638). IEEE. https://doi.org/10.1109/IV55156.2024.10588805.</mixed-citation></citation-alternatives></ref><ref id="cit51"><label>51</label><citation-alternatives><mixed-citation xml:lang="ru">Zaagsma, G. (2023). Digital history and the politics of digitization. Digital Scholarship in the Humanities, 38(2), 830–851. https://doi.org/10.1093/llc/fqac050.</mixed-citation><mixed-citation xml:lang="en">Zaagsma, G. (2023). Digital history and the politics of digitization. Digital Scholarship in the Humanities, 38(2), 830–851. 
https://doi.org/10.1093/llc/fqac050.</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The author declares that there is no conflict of interest.</p></fn></fn-group></back></article>
