Summaries of previous meetings of the IUNTC
AI-based iiRDS tagging of technical documents
Summary of the IUNTC meeting on November 14 by Gerald Adam Zwettler.
The IUNTC presentation by Gerald Zwettler explained the use and challenges of large language models such as ChatGPT and similar applications, particularly for the automated indexing of technical documents.
Large language models, such as ChatGPT, are used by many people for various tasks, including translation, summarizing texts, improving texts or writing short letters. Dr Zwettler introduced the Text-IT project as a case study of using AI for iiRDS tagging of technical documentation. Findings from this case indicate the challenges of using such tools in a professional, deterministic environment, because it is difficult to achieve consistent and reliable results with a stochastic system, as is required for the indexing of technical documentation.
_____________________________________________________________________________________
Gerald Adam Zwettler - University of Applied Sciences Upper Austria
Gerald Adam Zwettler completed his diploma and master's degree at the University of Applied Sciences Upper Austria, Hagenberg Campus and received his doctorate from the University of Vienna with a thesis on feature-based generic classification. Since then, he has been teaching and researching at the Hagenberg Campus of the University of Applied Sciences Upper Austria in the fields of signal and image processing, computer vision, project development and machine learning. Since 2018, he has headed the AIST (advanced information systems and technology) research group, which conducts application-oriented research in the fields of data science, machine learning and computer vision.
November 2024 - written by Yvonne Cleary & Daniela Straub

Training of large language models (LLMs)
Large language models (LLMs) are trained on a broad base of text data. Their learning is based on recognizing patterns in the order of letters and words: in an unsupervised process, they learn how letters and words follow each other in languages such as German or English, which makes it possible to identify larger linguistic patterns. There are, however, areas where large language models show weaknesses, and their performance can be improved through additional methods such as reinforcement learning and other learning approaches.

One particularly effective approach is self-supervised learning. Thanks to the vast amount of text on the internet - from books to articles to websites - language models can be trained efficiently. The concept of cloze texts is often used here: parts of a text are removed and the model is trained to insert the missing words or phrases correctly. This method is extremely flexible, because texts from almost any field - whether law, history or fiction - can be used. The model learns to fill in the gaps with the best possible words, which enables an efficient learning process based on an almost unlimited amount of training data. This approach allows models to be continuously improved and further developed, and it is the core of how large language models are trained and why they are so powerful.
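As a rough illustration of the cloze principle described above, the following Python sketch turns raw text into masked training examples. The function name and the masking rate are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of cloze-style (masked-token) training data creation,
# the self-supervised setup described above. The masking rate of 15%
# is illustrative.
import random

def make_cloze_example(text: str, mask_rate: float = 0.15) -> tuple[str, dict[int, str]]:
    """Blank out a fraction of the words; the model's task is to restore them."""
    words = text.split()
    targets = {}
    for i in range(len(words)):
        if random.random() < mask_rate:
            targets[i] = words[i]      # the "answer" the model must predict
            words[i] = "[MASK]"        # the gap it sees during training
    return " ".join(words), targets

masked, answers = make_cloze_example("Large language models learn patterns from vast amounts of text")
print(masked)   # e.g. "Large language models learn [MASK] from vast amounts of text"
print(answers)  # e.g. {4: 'patterns'}
```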
Improvement of prompt engineering and prompt tuning
To achieve near-optimal or even deterministic results with LLMs, an important question is how to improve prompt engineering and prompt tuning in a standardized and professional way. The concept of so-called "mega-prompts" involves dividing each task that a large language model is to perform into clearly defined sections. These sections should be formulated as precisely as possible.
The first step is to assign a clear role to the language model. For example, we could say: "You are an expert in technical documentation. Please help us." The model should always know in which role it should act - be it as an expert, teacher or advisor. This is particularly important in educational contexts: students should learn to make specific requests, e.g.: "I am a student, I am having problems with this task, and I need specific support." Clearly assigning roles in this way significantly improves results.
The task should be described precisely, including the individual work steps. Large language models benefit when complex tasks are broken down into smaller steps that can be processed iteratively. An example:
- Analyze the text.
- Think about the content.
- Structure the text.
- Create a summary in bullet points according to certain specifications.
By specifying such steps and instructions, the model can work more efficiently and in a more targeted manner.
A language model can only work with the information that is provided to it. Without context - such as geographical, temporal or situational background - the model will not be able to provide optimal answers. An example: "We are in Central Europe, it is winter. Should I wear a jacket today?" This type of context must be specified explicitly, as the model has no situational awareness of its own.
The desired output format should also be clearly defined, especially if there are specific requirements, such as compliance with certain standards or structural specifications. Structured formatted texts are much more useful than unstructured output.
An effective mega prompt could look like this:
- Role: "You are a research expert."
- Task: "Formulate a precise summary."
- Steps: "1. Analyze the source. 2. Structure the content. 3. Create a summary according to a list of points."
- Secondary conditions: "Follow specific guidelines and provide the response in XML format."
Such detailed prompts lead to much better results than simply entering a text and hoping for useful results.
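As a minimal sketch of how such a mega-prompt might be assembled programmatically, the following Python function concatenates the four sections named above; the structure follows the example, while the function itself is illustrative.

```python
# Assemble a "mega-prompt" from role, task, steps and secondary conditions.
def build_mega_prompt(role: str, task: str, steps: list[str], conditions: str) -> str:
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Steps:\n{numbered}\n"
        f"Conditions: {conditions}"
    )

prompt = build_mega_prompt(
    role="You are a research expert.",
    task="Formulate a precise summary.",
    steps=["Analyze the source.", "Structure the content.", "Create a summary as a list of points."],
    conditions="Follow specific guidelines and provide the response in XML format.",
)
```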
Language models can also be enriched with specific commands, e.g. for multiple-choice tasks, XML exports or other automated processes. In the education and e-learning sector, exams or content could be created efficiently in this way. By introducing parameters, models can even be controlled programmatically in order to meet specific requirements - such as legal issues or content-related topics.
A particularly interesting aspect of prompt optimization is that the language model itself often knows best how to handle its functions. If a prompt needs to be improved, AI itself can help. For example, if you are unsure how an optimal prompt should be designed, you can ask the language model directly: "You are now a prompt generator. Please define the best prompt for this task." This is a new and effective approach where the model itself suggests how it can best be used. In an earlier project, the language model was even able to act as a tokenizer: It suggested splitting texts into smaller units to accomplish the task more efficiently.
The following procedure is recommended to achieve good output results:
- Divide tasks into smaller steps: Large language models deliver better results when tasks are broken down step by step and into manageable units. In this way, the model can work more efficiently and avoid errors.
- Validate results: There is always a certain risk of so-called "hallucinations" (false or invented information). Repeat the same task several times and compare the results to ensure consistency; several runs can help to identify optimal solutions (see the sketch after this list).
- Provide examples: Show the model concrete examples of the expected results or solutions. The more examples provided, the better the model can learn to fulfill the task.
- Iterative improvement: If the results are not satisfactory, work iteratively and adjust prompts or framework conditions.
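A minimal sketch of the validation advice above, assuming a hypothetical `call_llm` client: run the same prompt several times and keep the majority answer. A low agreement count signals unstable, possibly hallucinated output.

```python
# Repeat the same prompt and keep the most frequent answer.
# `call_llm` is a hypothetical stand-in for whichever API client is used.
from collections import Counter

def most_consistent_answer(call_llm, prompt: str, runs: int = 5) -> tuple[str, int]:
    answers = [call_llm(prompt) for _ in range(runs)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count  # a low count signals unstable output
```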
AI and LLMs can support almost any task - be it optimizing prompts or structuring complex processes - and such techniques make stochastic language models more deterministic and reliable in use.
Automatic indexing of documents
The objective of the "Text-IT" project was to implement automatic or semi-automatic tagging of documents, whereby the tags had to comply with the iiRDS standard. There were restrictions regarding the input format. Image data was excluded as it was not part of the project scope, although good progress is now being made in image analysis. Large language models were used to achieve the project goals.
Some instructions were extracted from the document text and made available to the model via prompts. This required importing not only the commands but also the entire text, which slightly increased the workload. The model was expected to respond with a standard-compliant list of tags in textual form. A structured output, ideally in XML or JSON format, makes further processing much easier. A pure text output, on the other hand, would have been less useful from an IT perspective. An example of an expected output could look like this:
- Language: German
- Information topic: Generic collection
- Conformity: Generic conformity
Of course, the output also required post-processing to put everything in the right context.
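A small sketch of such post-processing, assuming the model replies in JSON (the field names are illustrative; the project's actual schema is not given in this summary):

```python
# Parse the model's structured reply and check it against expected fields.
import json

raw_reply = '{"language": "German", "information_topic": "Generic collection", "conformity": "Generic conformity"}'

tags = json.loads(raw_reply)
required = {"language", "information_topic", "conformity"}
missing = required - tags.keys()
if missing:
    raise ValueError(f"Model reply lacks required fields: {missing}")
```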
Different variants of interaction with the language models were also examined. A distinction was made between tagging (assignment of predefined tags) and labeling (user-defined assignment).
Tagging uses predefined options (e.g. subject areas) from which the language model selects the appropriate tags. This approach is easier to handle and enables consistent results.
In the second type of interaction, labeling, user-defined tags were used. Training was somewhat more difficult here. While automated tagging quickly reaches its limits, especially when adapting to new subject areas, labeling offers a little more flexibility but is more complex.
Another aspect of keywording is the number of calls. A single call to complete the entire task and obtain a result would only work to a limited extent, as the language model would be overloaded. A better strategy was to split the document and the text and divide the task into several calls. With this so-called multicall strategy, the results were much more precise as smaller tasks could be targeted.
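The multicall strategy might look roughly like this in Python, with `tag_chunk` standing in for one LLM call per text chunk; splitting on blank lines is an assumption:

```python
# Split the document into chunks, tag each chunk separately, merge the results.
def multicall_tagging(document: str, tag_chunk) -> set[str]:
    chunks = [p for p in document.split("\n\n") if p.strip()]
    tags: set[str] = set()
    for chunk in chunks:
        tags.update(tag_chunk(chunk))  # one focused call per chunk
    return tags
```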
The evaluation compared the different strategies and their impact on outcomes. We specified context, purpose, topics, rules, location, aspects and boundaries, among other things, in order to formulate the prompts precisely. But the most important factor is how prompts are defined; strongly defined prompts are crucial to achieve the best results.
The model configuration was deliberately generic so that parameters such as the model, the input prompt and the temperature could be easily adapted via configuration files. This means that when new or cheaper models become available, a simple line change can be made without modifying the code.
The "temperature" in this context influences how creatively or neutrally the model responds. For technical documentation, strictly neutral responses were to be achieved, so the temperature was set correspondingly low. The model also needed to be provided with the iiRDs statistics and associated commands.
The prototype of the user interface was very similar to the final goal: the AI automatically assigns texts to tags, with the user having the option to make subsequent adjustments if errors or inconsistencies occur. When the text is extracted from a PDF, the automatic assignment takes place and the user can correct it as required.
When evaluating the tagging accuracy, it was found that the results for closed tagging were very good, especially for predefined texts. However, problems occurred with the number of tags, which is why an evaluation metric was developed to correctly account for missing tags in the error statistics. In the free-text analysis, it was more difficult to make an exact assignment, as the similarity of the texts had to be compared. A similarity matrix was used to help identify and correct errors, although this is still a challenge.
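A similarity matrix of this kind could be computed as follows; the string-similarity measure shown (difflib's ratio) is an assumption, as the summary does not name the metric actually used:

```python
# Pairwise similarity between model-assigned and reference tags.
from difflib import SequenceMatcher

def similarity_matrix(predicted: list[str], reference: list[str]) -> list[list[float]]:
    return [
        [SequenceMatcher(None, p.lower(), r.lower()).ratio() for r in reference]
        for p in predicted
    ]

matrix = similarity_matrix(["safety note", "installation"],
                           ["Safety instructions", "Installation guide"])
```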
The results were also compared with those of human experts in the field of technical documentation. The agreement was good, but there were discrepancies, especially in the interpretation of tags such as "caution", "warning", "danger" or "note". These terms lie in a gray area where different interpretations are possible, even if their definitions are relatively clear. This subjectivity was an important learning point in the project.
In terms of cost, the calculations for tokens and a good model were relatively inexpensive, minimizing the need for manual review by experts.
Findings from this case suggest that if the human effort required to check and correct the results is less than the entirely manual processing of the document, AI costs can become insignificant. This shows the potential of a business model in which AI provides support while manual control of the results is retained.
In summary, it can be said that large language models can certainly be used for technical documentation, but some critical aspects require fine-tuning.
Artificial Intelligence Literacy and Adjacent Digital Literacies for the Digitalised and Datafied Language Industry
A summary of the talk by Professor Ralph Krüger - Institute of Translation and Multilingual Communication at TH Köln - University of Applied Sciences, Cologne, Germany at the International University Network in Technical Communication (IUNTC) on October 31, 2024
Prelude
“Just when we thought that the neural machine translation systems dating from 2016 were the main instance of automation we had to grapple with, the application of large language models (LLMs) to automated text generation […] from late 2022 has brought automated language processes to a new level, not just in translating but also in revising and adapting texts to user-specified receptor profiles. The challenge now is to predict the consequences for long-term translator employment and thereby to adapt our training to a new professional environment.” (Ayvazyan/Torres-Simón/Pym 2024:122)
“There is little doubt that AI reconfigures the distribution of intelligence, labour and power between humans and machines, and thus new kinds of capabilities are needed […].” (Markauskaite et al. 2022:2)
In this talk, Prof. Krüger began with a discussion of the impact of AI in the language industry, outlined AI literacies, and then described the individual dimensions of the AI Literacy Framework for Translation, Interpreting and Specialized Communication.
_____________________________________________________________________________________
Prof. Dr. Ralph Krüger, University of Applied Sciences, Cologne - Germany
Ralph Krüger is Professor of Language and Translation Technology at the Institute of Translation and Multilingual Communication at TH Köln – University of Applied Sciences, Cologne, Germany. He received his PhD in translation studies from the University of Salford, UK, in 2014 and completed his habilitation at Johannes Gutenberg University Mainz, Germany, in 2024. His current research focuses on the performance of neural machine translation (NMT) and large language models (LLMs) in the specialised translation process and on didactic strategies and resources for teaching the technical basics of NMT/LLMs to students of translation and specialised communication programmes. ORCID: orcid.org/0000-0002-1009-3365
November 2024 - written by Yvonne Cleary & Daniela Straub


Introduction
At one point, at least in the field of translation, it felt like we were at the cutting edge: Neural machine translation was well researched, findings were available, syllabuses and courses had been worked out. But the emergence of ChatGPT means we now have to deal with many new developments. While ChatGPT is not necessarily better at translation than, for example, DeepL, it allows for fine-tuning and customization of output to a much greater degree than ‘traditional’ neural machine translation. Language models open up a wide range of tasks beyond machine translation that can be (partially) automated in language industry workflows.
Recently, we have entered the era of multimodal LLMs. These models, such as GPT-4o, seamlessly integrate different modalities such as text, voice, images and video. Thus, we have moved from purely text-based models to modern multimodal models, which are summarized under "general-purpose AI". The European Parliamentary Research Service defines general-purpose AI as "[M]achines designed to perform a wide range of intelligent tasks, think abstractly and adapt to new situations" (European Parliamentary Research Service 2023:1).
The extended capabilities of these models go far beyond machine translation. As figure 1 below shows, they can be used, among other things, to support project management, terminology work, automatic terminology extraction, quality assessment and control as well as automatic post-editing.

Recording AI skills
Since the invention of the computer, numerous digital skills have emerged which are summarized under terms such as computer literacy, information literacy, media literacy, and data literacy. The collective term for these is digital literacy, which also includes machine translation skills. AI literacy is the most recent addition to this group of skills. Long and Magerko define AI literacy as "[A] set of competencies that enable individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home and at work." (Long/Magerko 2020:1).
According to Markauskaite et al., “there is little doubt that AI reconfigures the distribution of intelligence, labour and power between humans and machines, and thus new kinds of capabilities are needed […].” (Markauskaite et al. 2022:2). These capabilities are often discussed under the term "AI literacy".
If you look at the topic of AI from the perspective of skills, there are different approaches. One question could be which human skills could become obsolete and which skills could be lost in humans because the work is done by language models. For example, Sandrini has developed the so-called ‘translator obsolescence cycle’, which depicts the potential loss of certain skills as the degree of automation increases. Another recent study by Ayvazyan et al. examines which skills remain largely unaffected by large language models and could become increasingly important for human translators; this shift could mean higher qualifications in the profession in the long term. This echoes Olohan's statement in relation to neural machine translation: “[A]t least in the foreseeable future, it seems appropriate to think about the increasing use of technology as being frequently accompanied by an upskilling of translators, which is reflected in the need for translators to receive specific postgraduate training and education” (Olohan 2017:277–278).
From this perspective, the technologies that are currently available have potential, but they require the integration of people into quite complex work processes.
The AI Literacy Framework for Translation, Interpreting and Specialized Communication aims to reflect this upskilling approach and focuses on modern large language models as well as new technologies that could follow on from these models. The framework examines the impact of these technologies on translation, interpreting and specialized communication from a competence perspective - and also from a university teaching perspective to answer the question of what future students should learn in this field.
Overall, AI literacy is divided into several dimensions: technical foundations, domain-specific performance, interaction, implementation, and ethical and societal aspects, as illustrated in Figure 2.
Technical foundations
The technical dimension includes understanding how modern AI technologies and artificial neural networks work, especially the Transformer architecture with its self-attention mechanism and the use of word and sentence embeddings. Aspects such as the training methods of these networks, the training pipeline, fine-tuning and bringing the models into line with certain human values are also included. Another important aspect is the difference between natural and synthetic training data and their effects. It is crucial to understand that in modern AI technologies, the synthesis of AI model and training data forms an inseparable whole, where the weights of the model are calculated based on the training data - comparable to an omelette that cannot be broken down into its original ingredients. In addition, the technical dimension also includes the watermarking of AI-generated content, which is particularly important with regard to potential manipulation.
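To make the self-attention mechanism mentioned above concrete, here is the textbook scaled dot-product formulation over a toy sequence of embeddings; this is a generic illustration, not the architecture of any particular model:

```python
# Scaled dot-product self-attention over a toy sequence of embeddings.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # how strongly each token attends to every other token
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability for the softmax
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V                             # context-aware token representations

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                        # toy sequence: 4 tokens, 8-dim embeddings
out = self_attention(X, *(rng.normal(size=(8, 8)) for _ in range(3)))
```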
One might wonder why such technical foundations are necessary in the context of translation, interpreting or specialized communication and are not only relevant for computational linguists. A quote from Dorothy Kenny illustrates this point well: she emphasizes that working with "black box" technologies such as AI models without a basic understanding of how they work can lead to disempowerment of those who use these technologies. As long as we don't know what these technologies can do and what their limitations are, they work like magic - and this can prevent us from critically questioning their promises. A solid technical foundation helps to counter claims that promote new technologies as a panacea and enables us to use these technologies responsibly and critically.
Domain-specific performance
The domain-specific performance dimension is about the scope of the capabilities of these models. It concerns the level of performance for specific tasks, the identification of human added value, input-output modalities, their future potential and potential machine circularities. Determining the scope of capabilities and the task-specific performance level is not as trivial as it sounds, as the capabilities of these AI technologies remain somewhat opaque due to their general-purpose character (see figure 1 above).
An important point here is: what is the human added value? We need to ask ourselves this question in the future - not only for translation, but also for other professional fields in which these models perform impressively. For example, if a task cannot currently be automated by GPT - what happens when GPT-4 or a future version such as GPT-5 can take over this task? There are also dangers, such as those of introducing machine circularities, because these models are so versatile that one and the same model could produce a text, pre-process, translate and post-process that text, and finally perform a quality assessment of it. This is theoretically possible, but from a risk diversification perspective, it is better to distribute these processes across multiple technologies and, if possible, multiple human experts. This is also where the concept of the "expert in the loop" comes into play, in which humans are involved in AI-assisted workflows in various ways and remain responsible for the final quality of a product.
Psychologist Daniel Kahneman makes a general distinction between fast and slow thinking. There is one system for fast, instinctive, automatic thinking that requires little to no cognitive effort, and another for slow, rational thinking that involves more complex decisions. Andrej Karpathy and others have drawn this parallel with LLMs. They say that LLMs excel at thinking quickly and effortlessly. However, they are less capable of more elaborate, abstract and complex thinking that humans excel at. So if you ask about the overall added value of humans, this can be found above all in their ability to think in complex, abstract ways.
As Bubeck et al put it: “[GPT-4] relies on a local and greedy process of generating the next word, without any global or deep understanding of the task or the output. Thus, the model is good at producing fluent and coherent texts, but has limitations with regards to solving complex or creative problems which cannot be approached in a sequential manner.” (Bubeck et al. 2023:80).
OpenAI has a new series of models, the so-called "o1 series", in which the models are trained to go through a "chain of thought" before answering a question, in which the model breaks down a complex task into specific subtasks and then processes them one after the other. It has been shown that the "thinking skills" of this o1 series have been improved by this internal “chain of thought”. However, it is still unclear how far these models can be further developed in this respect.
Interaction
The interaction dimension covers knowledge of the available modalities of interaction, such as writing, speaking and potentially gesture control. It also includes specific pre- and post-editing, prompting, as well as the cognitive level, which concerns changes in the areas of receptive or productive skills or cognitive effects in a hybrid human-AI system. This also includes the development of adaptive expertise or a macro-strategy. The action level is also an important component of this dimension.
This dimension is particularly important from a human perspective. The question arises as to how to interact appropriately with these models in certain usage contexts and for specific tasks in which these models are intended to provide support. This requires knowledge of the modalities of interaction that are available today. Currently, we mainly write text-based prompts, but if you use OpenAI on a smartphone, for example, you can simply talk to the model. Future widespread interaction with these models may therefore no longer rely solely on written input.
Interaction may also include AI-specific pre- and post-editing beyond the field of MT. For example, one could imagine that developer documentation for a technical assembly or machine is entered into the AI and GPT then converts this documentation into user documentation. This would then be post-edited by a human expert.
There is also the cognitive level of interaction, which deals with possible changes in the areas of receptive or productive skills of AI users. The ideal notion is that of humans and machines working together and complementing each other. Ideally, our weaknesses are complemented by GPT and vice versa, but in practice this is not always the case. There could also be potential negative effects, such as stagnation in competence development or de-skilling, if too much reliance is placed on AI output. There is some evidence from machine translation research that translators who post-edit machine-translated text often still retain traces of the machine output. This issue will probably also play a role in the interaction with LLMs.
The same applies to the level of action. How is the ability to act distributed between humans and machines? There are concepts of collaborative agency that ultimately aim to expand the agency of both humans and machines so that they work together as one system.
The term "prompting" covers actions that humans use to guide LLMs to perform the intended tasks. Prompting is increasingly seen as an expert skill and is not trivial. A study by Zamfirescu-Pereira et al. showed how non-AI experts designed their prompts opportunistically and not systematically. They tended to interact with LLMs as one would interact with a human, without specifically considering that they were working with large language models. This led to overgeneralization, where solutions from one context were applied to others, often with ineffective results.
The OpenAI website has strategies and tactics for prompt engineering that show that there is more to prompting than meets the eye. Also, White et al. have developed a prompt pattern catalog that originated in software programming. Andrej Karpathy argues that the hottest new programming language is English, which means that this natural language interaction can be seen as a kind of programming where you instruct the LLM to perform certain tasks.
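In this "prompting as programming" spirit, a prompt pattern can be treated like a reusable function; the persona-style template below is illustrative, loosely modeled on the catalog idea of White et al.:

```python
# A reusable "persona" prompt pattern, filled in like a function call.
PERSONA_PATTERN = (
    "From now on, act as {persona}. "
    "When I ask about {topic}, respond as that persona would, "
    "and flag anything outside your expertise."
)

prompt = PERSONA_PATTERN.format(
    persona="a terminologist specialising in medical devices",
    topic="candidate terms extracted from a corpus",
)
```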
Implementation
The implementation dimension includes the establishment of an AI culture, the selection of an AI model, quality control, economic aspects, AI risks and legal frameworks applicable to AI. It deals with the question of how these systems can be integrated into practical workflows, which is much more difficult than simply buying a ChatGPT subscription and squeezing the model into existing workflows, without taking the concerns of end-users (translators, technical editors or other employees) into account.
The first step is to establish an AI culture within the company. In a study by Salesforce, more than half of generative AI users in international companies stated that they use AI tools without formal authorization. They reported that there are no guidelines from their employers on how to use these technologies, and many would have liked to receive corresponding training. This means that the AI culture was not yet established in these organizations - although things may have changed in the meantime. An AI culture requires a basic understanding of what tasks and problems in the organization could be solved by AI and how this can be done.
Many questions need to be answered, such as what processes need to be designed or redesigned, what the general level of automation is, where human added value is needed and which tasks could potentially be fully automated. And how does the European Union's new AI legislation fit into this context? There are ergonomic factors, such as organizational and cognitive ergonomics, that should be considered in order to introduce and implement AI adequately and in a responsible manner.
An AI provider must then be selected based on the appropriate AI model. There are thousands of models that can potentially be used: closed source, open source, large or small models, and those that excel in different languages.
Translation studies and the translation profession are particularly well-versed in AI implementation, as they have been dealing with neural machine translation (NMT) at least since the invention of the transformer in 2017. They know how to design or not design processes when integrating NMT and are familiar with the various risks associated with NMT. Many of these insights from translation studies could be integrated into other workflows where language models are used.
Ethical and societal aspects
Ethical and social aspects constitute the final dimension of the model and go beyond the narrow focus on specific professional fields. This dimension deals with the question of what AI skills we need beyond our professional role, as citizens in an AI-saturated society. For example, there is the question of whether AI could bring about social empowerment or disempowerment. Translators have already had unfortunate experiences of social disempowerment through NMT and other technologies - be it economic, loss of social status or loss of cultural capital.

There is also a risk that these models will produce toxic results, as they are very powerful and can answer a variety of questions that a human would not be expected to answer. Safety barriers must therefore be put in place to avoid these toxic results. Another risk is manipulation. AI models are pretty good at mimicking voices. Robo-calls or emails created with this technology could trick people into accepting assignments that offer poor compensation and short deadlines, or they could be used to manipulate translators or interpreters.

There is also a risk of epistemic violence or bias. AI models tend to misrepresent reality and reproduce social biases, such as gender bias. There is also a tangible and intangible substrate on which these AI technologies are based. They did not develop in a vacuum, and there are environmental costs. A significant amount of intellectual work has gone into developing the algorithms and compiling the training data. Again, these aspects have already been reflected on by translation studies, which could also make important contributions to technology impact assessments with regard to the actual consequences of the implementation of these technologies at a societal level.
Planned didactic operationalisation of the framework
The next step involves the didactic operationalization of the framework in the spirit of the learning resources created by the DataLitMT project. One potential vehicle for this operationalisation is the European Union Erasmus+ consortium "LT Leader: Language and Translation: Literacy in Digital Environments and Resources". The aim of the consortium is to map the technological landscape in the translation sector and to create learning resources for the development of digital skills in relation to these technologies. The focus of these digital skills is, of course, on AI skills. This is because the technological landscape is becoming more and more dominated by large language models.
A Framework for Understanding Cognitive Biases in Technical Communication
A summary of the talk by Prof. Dr. Quan Zhou (Metropolitan State University) at the International University Network in Technical Communication (IUNTC) on Thursday, September 19, 2024.
The following summary of the IUNTC presentation by Prof. Dr. Quan Zhou (Metropolitan State University) presents possible cognitive biases in technical communication based on the Retainment, Focus, Association, Compatibility, Prospect, Relatedness (RFACPR) framework, and discusses their relevance for technical documentation. These principles correspond to a timeline of a user's interaction with information. The user has the material in front of them, focuses on the present, directs their attention to it and then links it to the past and possibly anticipates the future.
Cognitive biases play a significant role in the use of instruction manuals as they influence the way users absorb, interpret and apply information. Cognitive biases are systematic thinking errors and mental shortcuts that affect our judgment and cause people to deviate from rational judgments and often reach inaccurate or irrational conclusions. They arise because our brain tries to process information quickly and efficiently. These distortions often remain unconscious, but influence our perception, behavior, decision-making and memory of events.
_____________________________________________________________________________________
Quan Zhou is a Professor and Chair of the Department of Technical Communication & Interaction Design at Metropolitan State University in the Twin Cities, U.S.A. He also directs the Design of User Experience graduate certificate. Quan’s current research focuses on the effects of cognitive biases in the design of information. He has published in Technical Communication, the IEEE Transactions on Professional Communication, and the Communication Design Quarterly. Quan holds a Ph.D. in Technical Communication from the University of Washington, Seattle.
September 2024 - written by Yvonne Cleary & Daniela Straub

Retention
Users are often confronted with a variety of information, both helpful and unhelpful. Sometimes, they might retain too much or even the wrong information. This is the principle of retention. When reading a manual, a user may take in everything they receive: the information may seem useful or visually interesting, or it may deliver overwhelming detail. The user might then become overwhelmed and no longer able to distinguish what is helpful and what is not. The result is that they could be distracted and misled.
People also tend to base their judgments heavily on the first piece of information they receive or the outcome of a situation. This "anchor" information is seen as particularly important and is usually remembered – even if it is unimportant. Subsequently, helpful information may be randomly linked with less helpful information and thus anchored. People also tend to judge an event solely on the basis of its outcome, rather than considering the whole process or events that led to that outcome. If an outcome has been successfully achieved, one tends to judge the process of getting there as correct, without taking into account those steps that may have been flawed.
Focus and availability
Users often focus their attention on particularly conspicuous or central information and overlook other important details – this is called the focus principle. They are attracted to eye-catching information, which can distract them from other, possibly more important, information. The focus is then not on the most productive information, leading to a cognitive bias.
Users also tend to make judgments based on the information that is most easily retrievable – something we call the availability bias. This can occur, for example, when searching the internet if unimportant or even incorrect information is found quickly. In the context of technical communication, this could mean that a user of a product or software tends to favor the features or information they use more often or that are easily accessible in the documentation. Features that are rarely needed or not prominently displayed in the documentation may be perceived as less important or useful, even if they could be crucial for the successful use of the product.
In technical communication, writers might recall similar situations or use existing patterns they know and that are available, but that do not explain the problem at hand. Users might misunderstand certain intentions based on memory because they deviate from the conventions they are familiar with.
Association and compatibility
People tend to associate information or link it to previous experiences that are precisely or metaphorically compatible. This gives rise to the association principle and the compatibility principle. It refers to our tendency to look for coherent patterns and relationships in information in order to understand why something is happening. There is a danger that information is incorrectly placed to suggest a context or causal relationship. In addition, the users' prior knowledge or experience influences their interaction with the product, and they might ignore new information in the user manual.
Prospect and relatedness
The prospect principle refers to how we plan future actions based on assessments of potential gains and losses. Many of us are so focused on avoiding losses that we often prefer the status quo, even when alternatives are potentially better.
The last principle of relatedness states that nowadays our decisions are increasingly influenced by the behavior of others as conveyed to us by mass and social media. Users are susceptible to the influence of their social environment. In the field of technical communication, this could mean that users allow themselves to be influenced by the opinions of others in forums or social media and thus rate certain product features or instructions as good or bad without having formed an informed opinion themselves. If, for example, the prevailing opinion in a community is that a particular function is useless or difficult to understand, new users could adopt this view even though the function may be explained simply and clearly in the technical documentation.
Avoiding cognitive biases
To avoid cognitive biases in technical communication, information designers and technical writers should ask themselves a series of questions that help them to consciously reflect on their designs and recognize possible errors in users' thinking. Above all, they need to think about how they can achieve the desired result.
Knowledge of the target group is crucial here. It is important to understand the user's mental schemata and vocabulary. Possible cognitive biases could be addressed by asking the following questions:
- What prior knowledge does the user likely have on this topic?
- What events or images come to the user's mind as possibly related?
- What elements might trigger the user's associations?
- What preconceived experiences does the user have on this topic?
The design of the information is crucial:
- How does the information design direct the user's focus?
- Is the information complete?
- Have information or data sources been eliminated voluntarily, involuntarily, intentionally or accidentally?
To identify areas and variables that may have been overlooked, think beyond the available information and data sources:
- What information needs to be included to fully consider the context?
- How can we avoid incorrectly correlating different pieces of information or interpreting them as causal relationships?
- What visual strategies are best suited to enable the user to recognize real risks?
It is particularly important to pay attention to the anchor point:
- Is the user fixated on a specific point?
- Are they zooming in on something that is essentially irrelevant?
Techniques to reduce cognitive distortions
The following techniques help to reduce cognitive distortions:
- Reduce the amount of information: Too much information can make it difficult to distinguish between helpful and unhelpful content. It is important to reduce the amount of data presented to facilitate focus on relevant and reliable information.
- Conscious design: Information should be presented in a way that draws the user's focus to the important and relevant content. Avoid overly conspicuous or irrelevant details that distract the user's attention.
- Conscious use of visual aids: Visuals should be designed to prevent unconscious distortions such as the anchor effect, overestimation of results, or incorrect interpretation of causal relationships.
- Avoid mixing information: Don’t mix useful information with irrelevant or even misleading information as this could lead to false associations or distortions in memory.
- Multiple testing: User testing with different target groups using instructions and technical content can identify potential biases early on.
- Clear structure: Structure information logically and clearly so that users can quickly grasp the essential points without focusing on unimportant details.
- Framing and context: Present information clearly to prevent misunderstandings caused by the availability or anchoring of isolated facts.
- Encourage critical thinking: Question the origin as well as the accuracy of the information and data presented.
If technical writers and information designers are aware of how our brains process information, they can use targeted techniques to avoid cognitive distortions.
The Process Is Not the Job: A Technical Writer's Journey in a Regulated Industry - Improving Technical Writing in Regulated Industries: Insights from Gianni Angelini
A summary of the IUNTC Talk on June 12, 2024, by Gianni Angelini
In this presentation, Gianni Angelini shared his personal experiences and insights gained as a technical writer in the medical device industry, focusing on the challenges and lessons learned. His journey illustrates significant professional growth and the evolution of technical writing practices within a highly regulated environment.
The presentation opened with a thought-provoking question: "What do we see?" This question sparked a discussion of Gianni's journey into the realm of medical device technical writing. Gianni entered this field in 2018, specializing in infusion pumps. These devices demand meticulous documentation due to their complexity and their critical role in healthcare settings.
_____________________________________________________________________________________
Gianni Angelini is an Italian senior technical writer with 14 years of experience in various industries (machinery, software, fire systems, and lately medical devices).
He lives in Ireland, where he earned a master's degree in Technical Communication from the University of Limerick.
Gianni has published a beginner's guide on the knowledge and skills required by modern technical writing for the Italian audience. He is currently working on his second book on technical communication, for an international audience.
In the meantime, he is enjoying a 1-year career break.
June 2024 - written by Yvonne Cleary & Daniela Straub
Challenges in Documentation
Initially, the manuals he encountered, and was required to update, lacked comprehensive instructions, especially concerning program changes within the pumps. A key challenge Gianni faced was insufficient detail in existing manuals regarding program switching. This gap hindered users' ability to operate the pumps effectively, prompting Gianni to rethink the documentation strategy.
Methodological Improvements
To address these issues, Gianni implemented a task-oriented approach. This involved enhancing the manuals with detailed instructions and reorganizing content for clarity and user-friendliness. His efforts not only improved the current documentation but also, together with colleagues, established a framework for future manuals.
Process vs. Job-Oriented Approach
Gianni recognized the prevalent cautious approach in regulated industries, which often leads to a mechanical, reactive mentality among technical writers. The risk of causing harm to users results in a very conservative approach to writing instructions, where technical writers tend to use the exact language of updates from SMEs, even if that language is unclear. They are more concerned with compliance than usability. Advocating for a user-oriented perspective, he emphasized the need for critical thinking and thoroughness in documentation practices.
Despite initial resistance, significant change occurred when a new system engineer introduced a requirements-based approach. This collaborative method enhanced accuracy and comprehensiveness in documentation, bridging the gap between engineers and technical writers.
Industry-Wide Trends
Gianni highlighted industry trends such as agile methodologies and a growing emphasis on critical thinking. These developments are reshaping technical writing practices in regulated environments, striving to balance regulatory compliance with innovation and usability.
Conclusion
Gianni's journey underscores the importance of technical writers engaging deeply with their roles, beyond mere procedural compliance. By adopting a job-oriented approach and prioritizing clarity and usability, technical writers can significantly enhance the quality of documentation, ultimately contributing to the safety and effectiveness of medical devices.
Empirical Studies in Translation and Interpreting: An Overview
A summary of the IUNTC Talk on May 23, 2024, by Caiwen Wang
During the May IUNTC meeting, we had the pleasure of welcoming Dr. Caiwen Wang as our guest speaker. She is a Senior Lecturer in Translation and Interpreting Studies at the University of Westminster and an Associate Professor in Translation and Interpreting at the Centre for Translation Studies of UCL. Dr. Wang's expertise encompasses both theoretical and practical aspects of translation and interpreting, and she has amassed an impressive portfolio of publications, including books published with renowned publishers such as Bloomsbury and Routledge. Her research interests focus on translation and interpreting studies, and applied linguistics in general. During her presentation, Dr. Wang enlightened us on the critical aspects of designing empirical research in translation studies. She emphasized the importance of observing and describing real-world phenomena to uncover insights that contribute to a deeper understanding of translation and interpreting. We were honored to have had Dr. Wang share her expertise with us on this occasion.
_____________________________________________________________________________________
Dr Caiwen Wang is a Senior Lecturer in Translation and Interpreting Studies in the School of Humanities of the University of Westminster and an Associate Professor in Translation and Interpreting at the Centre for Translation Studies of UCL, UK. She teaches translation and interpreting at both the theoretical and the practical level. Her research interests are translation and interpreting studies, and applied linguistics in general. She has published in leading T&I journals, such as Perspectives, Translation and Interpreting Studies, Across Languages and Cultures. Her recent co-edited book Translation and Interpreting as Social Interaction: Affect, Behaviour and Cognition by Bloomsbury (www.bloomsbury.com/9781350279315) is listed in the Bloomsbury Advances in Translation series, with Professor Jeremy Munday being the series' general editor. Her co-edited book Empirical Studies of Translation and Interpreting: The Post-Structuralist Approach by Routledge (https://doi.org/10.4324/9781003017400) is listed in the Routledge Advances in Translation and Interpreting Studies series.
May 2024 - written by Yvonne Cleary & Daniela Straub

Empirical research in translation and interpreting (T&I) has evolved significantly since the 1990s, with earlier developments in interpreting due to its roots in experimental psychology. Empirical studies typically follow an inductive approach, starting with observations and descriptions of real-world phenomena to generate general principles. These principles enable researchers to form hypotheses that are tested through further data collection, creating a cycle of induction and deduction.
Empirical research in T&I can be categorized in several ways:
1. Observational studies: These studies describe naturalistic data without interference from researchers. For example, analyzing metaphor translations in articles in "New Scientist" to identify patterns.
2. Experimental studies: Researchers set up experiments to control variables and simulate real-world conditions.
3. Survey studies: Researchers use questionnaires, surveys, interviews and focus groups to gather data.
Empirical studies may be qualitative, quantitative, or a combination of both.
- Qualitative studies in T&I often involve case studies with smaller samples, focusing on detailed descriptions and insights.
- Quantitative studies involve larger samples to identify general trends, using statistical analysis to test hypotheses.
Further distinctions include:
- Product-based studies: analyzing translated texts to establish “translation laws”.
- Process-Based studies: investigating cognitive processes of translators and interpreters using methods like think-aloud protocols or eye-tracking.
- Participant-based studies: study of the translator or interpreter in relation to the product or process, e.g. comparing the work of professional and trainee translators/interpreters.
- Context-based studies: study of social and cultural factors, e.g. examining interpreting during the COVID-19 pandemic.
Modern empirical research often integrates multiple dimensions, examining both product and process, or focusing on specific participant groups within particular social or cultural contexts.
Conducting Empirical Studies in Translation and Interpreting: The Research Process
The research process in empirical studies spans from formulating the research question to publishing the findings. This process involves several critical stages, each with its own set of considerations and challenges:
1. Research Design and Methodology:
In crafting a robust research design and methodology, several key considerations come into play, starting with the formulation of research questions. These questions serve as the foundation upon which the entire study is built, guiding the direction of inquiry and framing subsequent investigations. In the structure of a research paper, the questions are likely to be presented in one of the following three locations:
- Introduction: While some scholars opt to introduce their research questions at the outset, there is debate regarding the timing of this approach, with concerns about premature disclosure.
- End of literature review: A prevailing practice involves presenting research questions subsequent to an exhaustive examination of existing literature. This sequencing allows for a contextualized understanding of the study's relevance and contribution.
- Separate section: Particularly in studies with multiple research questions, allocating a distinct section to articulate these inquiries can accentuate their significance and delineate their individual impact on the research endeavor.
2. Methodological Choices:
This stage involves selecting appropriate research tools, recruiting participants (if appropriate), and defining dependent and independent variables. Choosing the right instruments for data collection and analysis is critical for obtaining reliable results.
3. Data Collection and Analysis:
In the realm of data collection and analysis, two critical aspects ensure the validity and reliability of research findings:
- Data coding: Effective data analysis requires careful categorization and identification. Misalignment between data and categories can undermine the credibility of findings. A useful practice is having a second verifier review the data and categories to ensure accuracy and consistency (see the sketch after this list).
- Empirical validity: An empirical study should be data-driven. However, some studies may appear empirical but lack substantive data analysis or present outdated literature reviews, weakening their arguments and findings.
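One standard way to quantify how well two coders agree - not named in the talk, but common practice in such verification - is Cohen's kappa; a minimal sketch:

```python
# Cohen's kappa: agreement between two coders, corrected for chance.
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(["literal", "free", "free"], ["literal", "free", "literal"])  # 0.4
```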
4. Presentation and Publication:
The phase of presentation and publication of a research paper involves several key elements that should be structured effectively to communicate findings and engage readers. A typical research paper includes:
- Introduction: Sets the context and outlines the importance of the research.
- Literature review: Surveys existing research and identifies gaps.
- Methods: Details the research design and methodology.
- Results: Presents the findings from data analysis.
- Discussion: Interprets the results and their implications.
- Conclusion: Summarizes the research and suggests future directions.
- Effective presentation: Using varied visual aids (e.g., charts, diagrams) instead of relying solely on tables can enhance readability and engagement.
Conclusion
In summary, a well-designed research project should start with a clearly defined topic and research questions, use appropriate and rigorous methodologies, ensure accurate data coding and analysis, and present findings effectively. By addressing common issues and leveraging replication studies, researchers can significantly contribute to the field of translation and interpreting studies, ensuring robust and impactful results.
AI in Practice
A summary of the IUNTC Talk on March 24, 2024, by Claudia Sistig and Prof. Dr. Jeremias Rößler, moderated by Prof. Sissi Closs
The most recent IUNTC event, "AI in Practice", was an opportunity to delve into the current use of AI in TC practice. The event featured separate but complementary presentations by Claudia Sistig and Prof. Dr. Jeremias Rößler, and was moderated by Prof. Sissi Closs. The 49 attendees joined a lively question-and-answer session after the talks.
_____________________________________________________________________________________
Claudia Sistig, a graduate of the Technical Writing program at Hochschule Hannover, has been working as a Technical Writer at Compart GmbH since 2011. In 2016, she advanced to managing an agile development team as a certified Scrum Master. Beginning in early 2022, she has been engaged in exploring the potential of AI-based content generation and management in the field of technical writing.
Dr. Jeremias Rößler, Professor for IT and Media Management at the University of Applied Sciences in Karlsruhe and former founder and CEO of ReTest GmbH, is an award-winning author of the book "Artificial Intelligence and Software Testing", a well-known keynote speaker, and an enthusiastic software developer.
Sissi Closs, Professor of Information and Media Technology, Managing Director of C-Topic Consulting and inventor of Klassenkonzept-Technik®, is a software documentation pioneer and one of the leading experts in technical communication.
She is a member of the standards group responsible for revising the 2651x series of standards. She is involved in many areas at tekom, for example on the advisory board for standards, on the editorial advisory board of 'technische kommunikation', on the European Academic Colloquium Review Board, and as a speaker, author and reviewer.
May 2024 - written by Yvonne Cleary & Daniela Straub
Harnessing AI in Documentation: Opportunities and Limitations
Artificial Intelligence (AI) presents a plethora of possibilities for documentation. From generating images to crafting code samples and text modules, AI streamlines various tasks, but it demands nuance. It is very effective for reviewing texts and suggesting links and topics, and it is outstanding in areas like multilingual support and simplifying complex information for diverse audiences.
However, AI cannot be used without human intervention in several contexts, including creating legally binding documents and diagrams. Despite advancements, AI cannot currently supply accurate contextual information and often misses crucial nuances needed for effective communication.
To optimize AI use, adjustments in documentation practices are imperative. Technical communicators have a role to play here in crafting content that can be parsed more effectively by AI, thereby facilitating smoother integration into workflows, e.g. for powering chatbots and large language models. With effective input from technical communicators, e.g. to provide contextual cues, organizations can leverage AI's potential to enhance efficiency and accessibility, bridging gaps in communication across diverse linguistic and technical landscapes.
Optimizing Documentation for AI Integration: Key Insights and Best Practices
The implementation of AI in documentation and technical writing involves several key considerations. One crucial aspect is providing adequate context to enable the AI to identify the correct information. This involves adapting writing styles to be more AI-friendly, ensuring that the documentation includes specific details such as product names, operating system compatibility, and other relevant information to reduce errors and improve accuracy.
Moreover, for professionals such as technical writers, developing soft skills like problem-solving, adaptability, and continuous learning is essential. While technical communicators don’t need to become data scientists, having a foundational understanding of AI concepts, machine learning, natural language processing, and neural networks is beneficial. This understanding helps professionals grasp the capabilities and limitations of AI, allowing them to make informed decisions about integrating AI into workflows.
Given the rapid pace of AI development, staying updated with new models, features, and tools is crucial. Companies should establish AI processes and integrate AI into their workflows. This includes updating style guides to incorporate AI-friendly writing rules, ensuring that documentation is optimized for AI-based functionalities like chatbots.
In practice, AI setups may involve various tools and platforms, such as chatbots trained on documentation, image generation tools like Adobe Suite or Stable Diffusion, and AI assistants like AI Positron in Oxygen for documentation review. These setups may utilize indexes generated by Python scripts to provide context to answer user questions accurately.
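To make this concrete, the following minimal Python sketch shows one way such an index could ground a chatbot's answers: it builds a TF-IDF index over documentation topics and assembles the most relevant ones into a prompt. The file layout, function names, and prompt wording are illustrative assumptions, not a description of the specific setup presented in the talk.

# Minimal sketch: build a TF-IDF index over documentation topics and
# retrieve the most relevant ones as context for a chatbot prompt.
# File layout, names, and prompt wording are illustrative assumptions.
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Documentation topics assumed to be plain-text files in ./docs
docs = {p.name: p.read_text(encoding="utf-8") for p in Path("docs").glob("*.txt")}
names, texts = list(docs), list(docs.values())

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(texts)  # one row per topic

def retrieve_context(question: str, k: int = 3) -> str:
    """Return the k most relevant topics, concatenated as context."""
    scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
    best = scores.argsort()[::-1][:k]
    return "\n\n".join(f"[{names[i]}]\n{texts[i]}" for i in best)

question = "Which operating systems does the product support?"
prompt = (
    "Answer using only the documentation excerpts below.\n\n"
    + retrieve_context(question)
    + f"\n\nQuestion: {question}"
)
# 'prompt' would then be sent to the chat model of choice.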
Implementing AI effectively requires practical solutions that prioritize minimalism and provide users with the information they need efficiently. By addressing these considerations and continuously adapting to advancements in AI technology, companies can leverage AI to enhance their documentation processes and improve user experiences. Technical communicators have an essential role in these developments.
The Future of AI in Software Development: A Glimpse into Automated Systems
As we navigate the evolving landscape of artificial intelligence (AI) in software development, a shift towards more automated systems becomes evident. Gone are the days of directly interfacing with AI models; instead, we're ushering in an era of specialized systems that harness the power of AI to deliver precise functionality.
Consider a scenario where a developer needs to implement a new feature on a large website. Instead of manually writing code, they interact with a sophisticated AI-powered system. By providing specific prompts, the system intelligently divides the task into smaller components and sends relevant requests to the AI model. This approach enables the AI to generate or adapt code across multiple files, seamlessly integrating the desired functionality into the existing codebase.
AI can handle a remarkable breadth of tasks within this framework. From generating HTML and CSS to crafting database queries, AI is versatile in tackling the full stack of software development. Developers are then free to focus on high-level design and innovation, rather than mundane coding tasks.
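The orchestration pattern described above can be sketched in a few lines. The sketch below is a simplified assumption of how such a system might work: a planning call asks the model which files a feature touches, and a second call updates each file. The complete() function stands in for any LLM client and is not a real API.

# Sketch of the decomposition pattern: ask the model to plan which
# files a feature touches, then request an update for each file.
# complete() is a placeholder, not a real API; plug in any LLM client.
from pathlib import Path

def complete(prompt: str) -> str:
    raise NotImplementedError("replace with a call to an LLM of choice")

def plan_files(feature: str) -> list[str]:
    # One relative file path per line is an assumed answer format.
    answer = complete(f"List the files to change for this feature: {feature}")
    return [line.strip() for line in answer.splitlines() if line.strip()]

def implement(feature: str) -> None:
    for rel_path in plan_files(feature):
        path = Path(rel_path)
        current = path.read_text(encoding="utf-8") if path.exists() else ""
        updated = complete(
            f"Feature: {feature}\nFile: {rel_path}\n"
            f"Current content:\n{current}\n"
            "Return the complete updated file."
        )
        path.write_text(updated, encoding="utf-8")

# implement("add a newsletter sign-up form to the footer")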
Looking ahead, the implications of this paradigm shift are profound. Software development is poised to become more accessible and democratized, allowing non-traditional developers to participate in creating custom solutions. Imagine a world where anyone can effortlessly generate software tailored to their needs, without extensive coding expertise.
This democratization of software development may lead to a proliferation of customized solutions, replacing standardized platforms with bespoke applications. However, it does not suggest that software developers will be redundant. Just as the advent of LEDs dramatically increased the use of lighting, AI-driven software development could spark a surge in innovation and adoption.
In essence, the future of AI in software development promises not the replacement of developers, but rather an amplification of their capabilities. By harnessing the power of AI within specialized systems, a new era of creativity, efficiency, and customization in software development is possible.
AI-Driven Software Development: A Vision for the Future
In recent years, there has been a paradigm shift in software development, moving towards AI-driven solutions that streamline the creation process and enhance functionality. Rather than relying on a myriad of plugins for additional features, the future lies in instructing AI to tailor software to specific needs, cutting down on unnecessary complexity and bloated codebases.
At present, AI possesses the capabilities of a junior developer, but with ongoing advancements, it will likely evolve into a sophisticated software architect. This trajectory suggests a future where standard software becomes tailored to specific industries, such as banking or healthcare, ensuring compliance with rigorous safety standards while leaving room for customization in less critical applications.
Looking ahead, it is possible that users will simply communicate their needs to the AI, which will execute tasks seamlessly, from booking vacations to managing internal processes. To achieve this vision, AI must become more interoperable with existing systems, ensuring reliability and trustworthiness. Despite current challenges, companies are actively addressing these issues to pave the way for a more deterministic AI future.
In practical terms, AI-driven software development requires a shift towards more specialized processes tailored to specific tasks, enhancing communication between stakeholders and AI systems. While concerns about reliability and ownership persist, implementing safeguards such as multiple AI checks and transparent AI-generated labels can mitigate risks.
Transitioning towards AI-driven software development entails a learning curve, as teams adapt to new workflows and technologies. However, with dedication and a proactive approach to skills development, teams can use AI to change the way software is created and used.
In conclusion, AI-driven software development will transform the industry, offering efficiency, customization, and functionality. By embracing this paradigm, organizations can unlock new possibilities and drive innovation in the digital landscape.
Human augmentation in Technical Communication in the world of AI
A summary of the IUNTC Talk on February 29, 2024, by Nupoor Ranade
Abstract of the talk: In the talk, I plan to discuss practical ways for technical and professional communicators to engage in ethical AI development using familiar approaches such as user experience design, human-in-the-loop AI augmentation, and communicating for AI transparency. The humanities and AI are two fields that have traditionally been seen as separate and distinct. However, in recent years, there has been a growing interest in the intersection of the two. This is due in part to the increasing sophistication of AI systems, which are now capable of performing tasks (such as writing and editing, especially for creative purposes) that were once thought to be the exclusive domain of humans. Another aspect is the rising ethical concern about AI technology and its impact on society. Humanities scholars like technical communicators can get involved in the research on AI ethics in different ways. Some of those are as follows: 1) they can collaborate with technical experts to help break down complex concepts for easier interpretability, 2) they can use their community engagement skills for user and public advocacy, and 3) they can use critical methods to evaluate the ethical impact of AI to check for biases. I will demonstrate these ideas by describing my work on five different projects to highlight the scholarship in rhetoric, technical communication, and other humanities fields in general, and to invite relevant discussions.
_____________________________________________________________________________________
Dr. Nupoor Ranade is an Assistant Professor of English at George Mason University. Her research addresses knowledge gaps in the fields of technical communication practice and pedagogy, and focuses on professional writing and editing, inclusive design, and ethics of AI. Through her teaching, Nupoor tries to build a bridge between academia and industry and help students develop professional networks while they also contribute to the community that they are part of. Her research has won multiple awards and is published in journals including but not limited to Technical Communication, Communication Design Quarterly, Kairos, and IEEE Transactions on Professional Communication.
March 2024 - written by Yvonne Cleary & Daniela Straub

The Ambivalence of AI
Human augmentation explores collaboration with technology to enhance physical or cognitive abilities. However, the ambivalence of AI for humans becomes evident when we consider its differing perceptions in Sciences and Humanities. In scientific circles, AI is viewed as groundbreaking and revolutionary, largely because of opportunities and advances in research. This positive, transhumanist perspective emphasizes that humans should use technology to develop enhanced abilities beyond their natural limits. Consequently, the goal in technology development is to create "superpowers" that can significantly expand individual potential.
In the Humanities, there is concern about the impact of AI on traditional professional fields. The Humanities' engagement with AI reflects on the relationship between humans and technology and how this collaboration can be improved. Specialists in writing and editing fear that automation threatens their prospects in content generation and social media communication. The younger generation in English studies is worried about their professional future in light of these developments.
This disparity of perspectives underscores the need for a nuanced examination of the impacts of AI. It is essential to address concerns in various fields while simultaneously recognizing and promoting the positive potential of AI.
Risks of Using AI
A crucial risk of AI deployment is the apparent lack of accountability. The impacts on societies worldwide could be significant if no accountability is established. Questions about transparency, the boundary between truth and falsification, ethical responsibility, and accountability in collaboration are central. How can we ensure that information is accurately conveyed, and that people learn to discern right from wrong, especially when users lack experience or context?
Governments and regulatory bodies have developed different approaches: the European AI Act emphasizes a risk-preventive approach, requiring developers and companies to disclose information about the risks of AI products. The UK's AI Action Plan emphasizes clear communication of the design of AI technologies to promote innovation. The US Executive Order focuses on data privacy, civil rights, consumer protection, innovation, and taking responsibility to ensure the safety and trustworthiness of AI technologies.
Ensuring safety and trustworthiness raises the question of how to promote understanding both of what the technology is capable of and of how it works. Trust requires communication, and technical communication plays a crucial role in the development and communication process of AI. If technical communication does not actively intervene in the development and communication of AI, numerous opportunities to make valuable contributions will be missed. Therefore, increased involvement of technical communication in the machine learning development cycle, and a reconsideration of the ways in which this can happen, are urgently needed.
Human augmentation of technical communication for AI
Technical communication involves the ability to provide clear, consistent and factual information, especially when dealing with complex concepts. Technical communicators translate complex ideas into simple language. Customizing content for specific audiences ensures that the content is understandable for different users. The documentation development lifecycle plays a central role here, starting with planning and analyzing the target audience, followed by developing content using various tools and technologies. This is where the diverse contributions of technical communication in the development and use of AI and machine learning come in.
Technical Communication for AI Documentation
An important area where technical communication can contribute is in the documentation of AI, particularly addressing the complexity of 'black box' ideas. Increasing access to, and the transparency of, these complex ideas is important, both for users and for regulatory authorities. For example, the introduction of bias in AI-generated content is a concern. The discussion of how to avoid bias began in 2018 with the idea of documenting the technology development process in detail. This essentially means creating developer documentation for machine learning or AI technologies. All aspects should be captured in model cards, including training data, their composition, excluded elements, and the relevant population. The aim is to comprehensively document the entire development process of the machine learning model.
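As a rough illustration of the model card idea, the sketch below captures the aspects mentioned above as a simple data structure. The field names and example values are illustrative assumptions, not a standardized schema.

# Illustrative model card as a simple data structure; the field names
# are assumptions based on the aspects listed above, not a formal schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str              # sources of the training corpus
    data_composition: str           # make-up of the data, e.g. languages
    excluded_data: str              # what was deliberately left out
    relevant_population: str        # for whom the model was evaluated
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="support-ticket-classifier-v1",
    intended_use="Routing customer tickets to product teams",
    training_data="Two years of anonymized support tickets",
    data_composition="English only; enterprise customers",
    excluded_data="Tickets containing personal data",
    relevant_population="English-speaking business users",
    known_limitations=["Untested on consumer-support phrasing"],
)
print(card)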
However, the reality of documentation reveals a discrepancy. Despite the rapid advancement of technology, there is a scarcity of scientific literature on AI documentation. The research literature broadly falls into three main areas: user requirements, standard development, and evaluation. Technical communication has long been engaged with these topics. However, there is often a lack of awareness in the developer and research community that this technical communication research already exists. Therefore, it is now time for technical communicators to actively engage in these discussions—not only regarding documentation as a byproduct but also concerning the development of AI technologies themselves.
Technical Communication for Prompt Optimization
The second area where technical communication can actively contribute is in rhetorical prompt engineering. This involves determining how to instruct a specific generative AI tool with a prompt to achieve the best results. Although there has been research on standards in prompt formulation for some time, there is no established formula that guarantees effective output, and this is because, to date, those addressing these issues have not come from the field of technical communication or technical writing.
The rhetorical situation, a theory from the 1960s developed by Lloyd Bitzer and others, holds that the more specifically and accurately one understands a communication situation, the better one can respond to it. Therefore, a comprehensive understanding of the current situation or problem is crucial to finding an appropriate response. This insight can be directly applied to create a formula for prompt engineering.
In an AI request, one submits an input text - the prompt - and various processing steps occur before the output text appears. Due to the way machines learn, contextual embedding is crucial. Unlike human learning, where stories about situations, events, and people play a role, the machine learns by looking at word sequences or sentences that are close to each other. During the initial training of a Large Language Model (LLM) (e.g. for ChatGPT), the machine classifies words by collecting and categorizing publicly available internet content. When a request is made, the machine tries to determine which group the content falls into and then extracts relevant words and sentences. The problem with such classification systems is that there is no coherent story associated with the content. This is where human augmentation comes into play: to effectively tell the machine, "Wait, let me work with you so that you give me the most accurate output."
When generating prompts, it is important to consider how the machine understands prompts. The more complete the context of the prompt, the better. Therefore, it should include rhetorical components such as: purpose, need, topic, author, and audience. Here, technical writers can leverage their knowledge of rhetorical situations to tailor information delivery for prompt engineering and develop prompt formulas for various conditions and cases.
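One possible way to operationalize such a prompt formula is sketched below: a small Python function that assembles a prompt from the five rhetorical components named above. The template wording is an illustrative assumption, not a formula proposed in the talk.

# Sketch of a prompt formula built from the five rhetorical components
# named above; the template wording is an illustrative assumption.
def rhetorical_prompt(purpose: str, need: str, topic: str,
                      author: str, audience: str) -> str:
    return (
        f"You are writing as: {author}\n"
        f"Audience: {audience}\n"
        f"Topic: {topic}\n"
        f"Purpose: {purpose}\n"
        f"What the reader needs: {need}\n"
        "Write the text accordingly."
    )

print(rhetorical_prompt(
    purpose="explain a maintenance procedure",
    need="step-by-step instructions before first use",
    topic="replacing the battery pack",
    author="a technical writer preparing the device manual",
    audience="non-expert end users",
))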
Technical Communication for Content Development
The third area where technical communication can make a significant contribution is in the development of content itself.
The programmatic approach in teaching technical communication deals with curricular topics such as grammar, punctuation, and word selection for clarity and conciseness – areas that AI can handle well. In contrast, the rhetorical approach not only considers explicit words but also implicit contexts. For example, instead of using the term "race," the word choice "ethnicity" should be preferred, and in a software context, although it is commonly used, the word "kill" should be avoided as it evokes associations with warfare.
There is a need for more control over the process of contextual embedding of words. Technical communication can contribute by attempting to clean up data – a supervised machine learning process. Although developers can recognize problematic word pairings, teams often lack professionals who are knowledgeable about content design and can contribute to the model development process. Ethical editing and framing can be used for data preprocessing, fine-tuning, and content evaluation after the model's release. In this regard, technical communication plays a key role in ensuring the application of ethical principles and the qualitative improvement of the model.
Further Areas of Activity for Technical Communication
Technical communication can also leverage implicit contributions, i.e., non-visible interactions of users. In the software and documentation industry, technical communication uses infrastructures like GitHub to gain insights into the backend of documentation platforms and understand user issues. However, similar processes are lacking for AI tools, to understand how users use generated content and to what extent it is successful. Technical communication can contribute because it has mechanisms to understand user needs, improve products, and communicate effectively with users.
Another crucial aspect is equality and social justice. Technical communication traditionally advocates for user concerns and understands their needs and product usage. Technical communicators can contribute to creating diverse testing environments to ensure that machine learning models receive data from various relevant user groups. This contributes to more equitable and socially just technologies.
Insights into user assistance development at SAP
The latest meeting of the International University Network in Technical Communication (IUNTC) saw three SAP UA experts present their experience and insights.
IUNTC Meeting on November 8, 2023
Dominik Strauß, Heike Saam-Mourton, and Patricia Schler from SAP delved into the world of user assistance (UA) developers, exploring their roles, responsibilities, tools, learning opportunities, and the ideal candidate profile for this dynamic field. SAP is Europe's biggest software company, with a workforce of over 100,000 employees worldwide, about 1,000 of whom are UA developers and in-house translators. And with more than 20 million unique visitors to SAP Help Portal, one of the major publication channels for SAP UA, it is fair to say that the work of UA developers is widely consumed.
_____________________________________________________________________________________
Dominik Strauß, Heike Saam-Mourton, and Patricia Schler work in a central user assistance team at SAP SE, where they are responsible for communication activities for the UA community at SAP, as well as for recruiting and the internal curriculum for UA developers.

Chameleons at work
At SAP, user assistance is much broader than the traditional conception of technical writing. UA plays a pivotal role in helping users navigate complex systems, and every app and every feature comes with user assistance. To ensure users are provided with the right help when, where, and how they need it, UA developers take on a wide range of roles, such as documentation developer, UI text creator, terminology expert, blogger, content management system expert, software tester, search optimization expert, UX designer, multimedia creator, accessibility expert, and more.
SAP’s UA developers work in an agile development mode, often in scrum teams of ten. The content they create is just as varied as their roles, and ranges from legally required documentation and UI text to guided tours, graphics, videos, blog posts, and some nice-to-have content such as chatbots and podcasts.
To create and manage their content effectively, UA developers use a number of tools. These include, but are not restricted to:
- MadCap IXIA CCMS: A Content Management System (CMS) for content creation
- SAPterm: SAP’s own terminology database for maintaining SAP-specific terms
- MS PowerPoint and Visio: for image creation, for example diagrams or interactive graphics
- Camtasia: for video production, for example, how-to videos
- Semaphore: for metadata modeling, taxonomy, and ontology
Due to SAP’s diverse product portfolio, it is crucial that UA developers ensure user assistance deliverables are consistent across all offerings. Style guides and templates, as well as help in the form of support forums, are an important aspect in achieving this.
Focus on continuous learning
SAP places a strong emphasis on continuous learning and development of its workforce. The company offers a comprehensive internal curriculum organized into learning journeys. The curriculum is flexible, allowing individuals to tailor their learning paths based on their experience, educational background, team requirements, and personal preferences. New hires and experienced UA developers alike can benefit from the dedicated UA learning journeys to upskill and specialize in areas of interest, such as UX writing, conversation design, or analytics.
An ideal candidate? A jack-of-all-trades
When it comes to hiring UA developers, SAP would ideally like to see candidates with a degree in technical communication, experience in topic-based writing within an XML environment, familiarity with terminology work, a basic understanding of technology and business processes, and proficiency in creating meaningful graphics and other multimedia content.
Most important of all, however, are strong English language and writing skills. English is the primary language of SAP's UA content. It is also the source for translations into up to 40 languages for documentation and up to 80 languages for UI texts, which is another reason why applicants must have excellent English language skills.
Introducing iiRDS to universities: Gain insights and get involved.
iiRDS meets IUNTC on September 20, 2023
Ralf Robers, head of the documentation department at Körber Supply Chain Logistics GmbH and president of the iiRDS Consortium, introduced iiRDS at the IUNTC webinar. The event was the kick-off for an exchange between the consortium and universities, aimed at closer cooperation on the topic of intelligent information for use.
September 2023 By Susanne Lohmüller

About iiRDS
iiRDS stands for intelligent information Request and Delivery Standard. The iiRDS Consortium consists of 26 members who are driving the standard forward as service providers, tool manufacturers, or industrial companies, and who are dedicated to information for use as part of digital business models. The Consortium looks forward to welcoming universities to its membership in the near future.
Ralf introduced iiRDS and guided participants through the changes in users' and authors' perspectives, as well as the changes in the content creation process and their consequences for technical communication, such as the growing importance of the semantic net. “With iiRDS we have a really good starting point for creating knowledge graphs.” By using iiRDS, companies can enrich and link information sources. Ralf also visualized the core metadata, which is available for free download at iirds.org. The standardization that iiRDS provides covers both standardized metadata, expressed in RDF, and a standardized delivery format. This year, the iiRDS Consortium took part in the Industry 4.0 InterOpera project to develop a submodel of the Asset Administration Shell (AAS). The AAS is the basis for interoperability and is the digital twin of a machine or plant: digital twin data in the administration shell describes the product of the physical world in the information world. So “iiRDS is a big piece of the future that we want to share with as many people as possible in the digital era”.
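As a small illustration of what iiRDS-style metadata can look like in practice, the Python sketch below creates a few RDF triples with rdflib. The namespace URI, class, and property names are simplified assumptions for illustration; the authoritative vocabulary is defined in the iiRDS specification at iirds.org.

# Sketch of iiRDS-style metadata as RDF triples using rdflib.
# The namespace URI, class, and property names are assumptions for
# illustration; see the iiRDS specification for the real vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

IIRDS = Namespace("http://iirds.tekom.de/iirds#")   # assumed namespace
EX = Namespace("http://example.com/docs/")          # example identifiers

g = Graph()
g.bind("iirds", IIRDS)

topic = EX["maintenance-topic-42"]
g.add((topic, RDF.type, IIRDS.Topic))                           # assumed class
g.add((topic, IIRDS.title, Literal("Replacing the filter")))    # assumed property
g.add((topic, IIRDS["relates-to-component"], EX["pump-x200"]))  # assumed property

print(g.serialize(format="turtle"))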
Next steps of iiRDS in university collaboration
The presentation was followed by questions and an exchange of ideas about how iiRDS could be integrated into universities. The iiRDS Consortium offers to organize guest lectures to present iiRDS, either at the interested university or as an online lecture. In addition, an iiRDS sub-working group is to be established, in which consortium members and universities will participate and drive the exchange. Meetings could be held monthly or every other month. Active participation in the working group would be a prerequisite for free membership in the consortium for the respective university. Universities and students can not only participate in and help shape the exchange of expertise here, but also gain access and contacts to companies, e.g. for final theses. The iiRDS Consortium plans to offer the iiRDS Online Training to a limited number of university staff or students for free participation in 2024.
If you are attending the tcworld conference in Stuttgart, join us in the meet-up room on Tuesday, November 14 at 4:30 p.m. for a follow-up discussion and the first meeting of the university collaboration sub-working group.
The iiRDS Consortium is open for ideas and interested universities are welcome to contact Susanne Lohmüller at the tekom-iiRDS Office.
What does ChatGPT mean for teaching technical communication?
Results from the first in person meeting of the International Network of Universities in Technical Communication
The first in-person meeting of the International Network of Universities in Technical Communication - IUNTC - took place in the late afternoon of May 4 at Karlsruhe University of Applied Sciences, Germany, preceding the European Academic Colloquium (EAC). About 20 people from all over Europe - university teachers, students, and others interested in technical communication - gathered to discuss important topics about studying and teaching technical communication. It was an excellent opportunity for networking, exchanging ideas, finding project partners, and learning about the latest trends.
In keeping with the motto for the eighth European Academic Colloquium - the next steps in digital transformation - the question arose as to how ChatGPT will change teaching and learning. Generative AI technologies, such as large language models, have the potential to revolutionize much of our higher education teaching and learning.
May 2023 By Yvonne Cleary & Daniela Straub

There are many questions to resolve. From the students' perspective: are they allowed to use ChatGPT for a seminar or final paper, or does this practice constitute cheating? How can they use ChatGPT best? What skills should they acquire, and which skills become obsolete because of ChatGPT? Teachers and lecturers ask similar questions from a different perspective: which skills should be taught, and how can students' competencies be tested rather than their ability to prompt generative AI models? How can ChatGPT and other generative AI systems be used to make the development of training more efficient, or even to improve teaching? Challenges on the teachers' side include ensuring that students do not simply copy and paste their work from ChatGPT. Teachers also have to consider how teaching courses, learning for exams, writing seminar papers and theses, and assessing student learning and performance will change because of the new technology.
In order to improve participants' understanding of ChatGPT, the meeting included demonstrations. ChatGPT is an artificial intelligence (AI)-based conversational agent that can generate various types of content, including college-level essays. Generative AI systems can generate text, images, or other representations with relatively little human input. They work most effectively when the prompt is precise and specific, so it is really important to formulate a good question, to which ChatGPT then "writes" the answer.
We, the participants of the IUNTC, asked ChatGPT if it could write us an opening speech for the following day's EAC. All participants were amazed at how good the result was. Since this was a vivid experience of how these technologies will also revolutionize technical communication, the group decided to show it at the opening of the EAC - a demonstration of how AI and digitization will change technical communication and teaching.
Hence, it is not surprising that the advent of generative AI fundamentally challenges accepted knowledge, assumptions, and behaviors in higher education. And one thing is also obvious: ChatGPT and GPT-4 are only the forerunners of what we can expect from future generative AI-based models and tools. So it is worth looking into the impact on higher education.
The next IUNTC meeting, on Wednesday, June 7 at 3:00 pm, will continue to deal with this topic in detail: "AI in academia: balancing benefits and challenges" by Jenni Virtaluoto & Prof. Sissi Closs.
Another central question during the meeting was: how can we make technical communication more attractive as a field of study? Munich University of Applied Sciences suggested finding project partners in the community for joint marketing campaigns. The problem that too few students are interested in these programs is common across several universities, and it applies not only to technical communication but also to other courses of study. In spite of the challenge of recruiting students, technical communication graduates are very attractive to the job market and can expect interesting job offers and a good salary. One challenge is that students may be reluctant to take subjects that involve a lot of technology; current topics such as user experience design or information design may be more attractive, and for this reason many universities are adapting the content of their study programs. In addition, technical communication is not well recognized as a profession in many countries, so targeted recruitment campaigns would have to start here. Moreover, the desired target group - school leavers - is not easy to reach. It is important to leverage social media, create a social media presence, and showcase the relevance and importance of technical communication, which can attract more students who might be interested in pursuing a career in this field. Therefore tekom, as the professional organisation for technical communication, can help by providing networking opportunities and marketing campaigns.
The IUNTC meeting ended with a nice dinner together at pleasant summer temperatures in a beer garden in Karlsruhe.
What are the new ways of working for technical communicators?
Kai Weber led the most recent meeting of the International University Network in Technical Communication (IUNTC) on March 23, 2023. He asked about new ways of working for technical writers, as technical communication is undergoing fundamental changes due to automation, digitalization, and globalization: what are the effects of digitalization, and how does it change the way we work?
March 23, 2023 Text by Monika Vortisch

The group discussed what kind of changes technical writers have to deal with nowadays. The participants agreed that the working practices of technical writers have shifted considerably: communication, collaboration and shared learning are highly relevant today.
Communicating in agile teams, giving more frequent, less formal feedback, and working in shorter increments of time for rapid or continuous delivery are gaining more and more importance. Writers must deliver something every day; testing is much shorter and more interactive.
These developments imply that technical writers need to acquire more and different competencies than before. “I was just surprised I had to develop skills I hadn’t before,” said one of the participants.
For technical editors, who used to belong to a separate department, this means a serious change. The backing of the doc team is missing; editors have to work independently and make decisions on their own. As this way of working requires a high level of personal responsibility, it means a big change for employees who are used to hierarchical structures.
The big advantage of agile teams: daily communication in the meetings gives all team members insight into each other's work. Technical writers often get direct feedback on their documentation and can then adjust it to improve the user experience.
Video conferencing or web meetings are extremely important for editors, as they usually have no access to the development environment and must rely on being part of the information flow.
In the context of agile development, only a few specifications are still available to editors as a source of information, so a good document management system is essential. Most technical writers see agile working as an advantage: products and their documentation are tested in a timely manner, so errors are discovered more quickly and the editors' work can be assessed.
When working according to the waterfall principle, developers often have no way to account for the time they spend in meetings with editors. In Scrum mode, the creation of individual texts is a backlog item, which also strengthens the position of the editor in the team. A key takeaway from the meeting was that technical writers must keep an open ear; otherwise their documentation might fall out of step. IUNTC participants thanked Kai for his interesting and insightful presentation.
Kai Weber is a lead technical writer for financial and banking software at SimCorp GmbH. He is a long-standing member of tekom and a regular conference speaker.
How to integrate research into the TC curriculum: an experience report and some lessons learned
Applied research is important, both for supporting the practice of technical communicators and information designers and for the identity of the field. However, the high teaching load at universities of applied sciences, where programs of study for technical communication and information design are usually found, leaves little leeway for research. In this context, Dr. Michael Meng, Professor of Applied Linguistics at Merseburg University of Applied Sciences, emphasizes the importance of impulses from teachers in order to advance research.
December 15, 2022 Text by Monika Vortisch

The general goals
“We have a research gap in Germany. For this reason, we need more graduates who are interested in research,” Dr. Michael Meng, Professor of Applied Linguistics at Merseburg University of Applied Sciences, said at the last IUNTC meeting on December 15. He has been working since 2012 in the department of economics and information technology at Merseburg University of Applied Sciences, researching in the fields of technical communication, information design and psycholinguistics.
Dr. Meng sees the lack of time at universities of applied sciences as one of the main reasons for this gap. With teaching obligations of 16 to 18 hours per week, it is difficult to build and lead a research group. “However, it is also important to do more at universities to introduce students to research,” says Dr. Meng. The role and importance of research must be demonstrated in order to spark students' interest and allow them to collaborate actively. Teachers should continually consider how to introduce students to research, interest them in it, and enable them to conduct research themselves.
For this purpose, a module of 4 semester hours per week (a total of 150 hours), dedicated specifically and exclusively to research and research methods, was integrated into the third semester of the “Information Design and Media Management” master's degree program at Merseburg.

Core contents of the module
Part 1 includes teaching the role and importance of research, general goals and approaches, the structure of research projects, how to proceed with quantitative, qualitative, and mixed-method studies, and the evaluation and presentation of results;
Part 2 includes planning and implementing a small research project in groups of 4-5 students and presenting it as a conference poster.
The central document and basis for planning the content and designing the exercises is Hayhoe & Brewer's book “A Research Primer for Technical Communication” (2nd edition, 2021, Routledge, New York).
What research topics do students work on in this module?
“Topics for student research projects are sometimes developed in cooperation with businesses. Other topics are oriented to studies from earlier research, attempting, for example, a complete or partial replication. Very frequently, however, students work on a topic they have freely chosen themselves,” explains Dr. Meng. The central challenges, however, are limiting the scope of the topics, as only 15 weeks are available, and choosing a research design that can be realized within a semester.
According to Dr. Meng, the module has been well received. The results are consistently of good or very good quality, and exciting topics are found and worked on. “Now we are hoping that one or another graduate would like to deepen their interest in research after completing the master's degree and take the path to a doctorate, for example in a doctoral center at the University of Merseburg!”
Please contact Prof. Michael Meng with any questions at the following email address: michael.meng@hs-merseburg.de
Master’s in Communication and Media: Introduction of a New Study Discipline at Pwani University in Kenya
Promoting the academic discipline of technical communication through international cooperation, leading to the development of a Master’s in Communication and Media at a university in Kenya, was the subject of the most recent IUNTC meeting on October 25th, 2022. The meeting was led by Professor Sissi Closs and Belinda Oechsler from the University of Applied Sciences, Karlsruhe (HKA), joined by colleagues from Pwani University (PU), Kenya - Edith Miano, Dr Rukiya Swaleh, Dr Elizabeth Munyaya, Alex Muthanga, and Prof. Yakobo Mutiti - and Dr Yvonne Cleary from the University of Limerick (UL), Ireland.
October 20, 2022 Text by Yvonne Cleary

This project has a history dating back to 2014, when Professor Closs initiated an exchange project between HKA and PU. This project was called ADDI – ADaptive Digital Learning System. The focus was on Communication and Media, live student exchanges between HKA and PU, and the development of a digital platform to share content and collaborate: karlifi.org.
When in-person exchanges became impossible in 2021 due to the pandemic, a new project was initiated to enable virtual exchanges: VIEL (Virtual Intercultural Exchange and Learning). A third partner, UL, was included. Students at the three universities get to know one another in weekly meetings online, attend workshops, and complete small group assignments (like making podcasts and short videos). Students receive a certificate and get to meet at one of the participating universities if no travel restrictions are in place. To date, VIEL students and staff have met in Kenya (February 2022) and Ireland (July 2022). Both ADDI and VIEL were funded by BWSplus (the Baden-Württemberg-STIPENDIUM for University Students, a programme of the Baden-Württemberg Stiftung).
A third project in this cycle and by far the most ambitious, MCM (Master’s in Communication and Media) is funded by the DAAD (Deutscher Akademischer Austauschdienst –German Academic Exchange Service). Its goal is to introduce a new study programme in the discipline of technical communication at PU. The project includes various stages: curriculum development, accreditation of the programme at PU, implementation of a suitable IT environment and equipment, staff training, marketing of the programme in Kenya, and the establishment of a network to get support from local companies and to connect them with programme staff, students, and graduates.
Following a live kick-off event at PU in September 2022, a pilot programme is already underway with 12 students from PU taking Special Courses in Communication and Media. The pilot involves 11 external teachers, with most classes delivered virtually but some on-site. It is a very important phase in the project because the feedback from students and teachers will shape the delivery of the Master’s in Communication and Media, due to start in Autumn/Winter 2023.
The various connected projects have been successful because of funding from BWSplus and DAAD, a stable project team, excellent leadership, a high level of commitment on the Pwani side, and growing interest from students and lecturers.
To learn more about the projects, or to get involved, see:
Master’s in Communication and Media Website: mcmstudy.org
Karlifi Learning Platform: karlifi.org
Technical Communication Teaching Projects: Virtual Exchanges with the TAPP Project
Suvi Isohella from the University of Vaasa in Finland led a very engaging and well-attended session on the TAPP project, which supports international inter-institutional student projects in technical communication.
September 29, 2022 Text by Yvonne Cleary and Suvi Isohella

Collaborative international inter-institutional projects offer immense benefits for technical communication students and teachers. Students learn first-hand about the challenges and rewards of collaboration, virtual exchange, and intercultural communication. They also develop transferable skills that help to prepare them for the workplace. Teachers experience new and dynamic pedagogical approaches, and successful projects may lead to research, as well as teaching, partnerships. While many institutions encourage international inter-institutional exchanges because of these and other benefits, establishing contacts and projects is time-consuming and complex.
On Wednesday, September 21st, during the first International University Network in Technical Communication (IUNTC) meeting after the summer break, Suvi Isohella, University Teacher in Technical Communication at the University of Vaasa in Finland, led an engaging and informative discussion about the Trans-Atlantic and Pacific Project (TAPP). Suvi explained how TAPP supports international inter-institutional exchanges in technical communication.
The TAPP network was established in 1999/2000 by Dr Bruce Maylath and Dr Sonia Vandepitte. What began as an exchange between two universities has since grown to a network of 49 universities in 21 countries across five continents. The network coordinator maintains a database of project partners and project types, and helps individuals to find suitable partners, and to establish and run projects.
Suvi described three types of project supported by TAPP.
- Writing-translation projects, where writers prepare a source text for translation, and translators localise the text for the target audience.
- Bilateral translation-editing projects, where translators get feedback from editors about their translations (usually into English, which is not their first language).
- Multilateral projects, which reflect the workplace norm of cross-functional collaboration.
Students at the University of Vaasa have been involved in several multilateral projects since 2010. The Vaasa students have focussed on usability testing in many of these projects. For example, they have worked with students from universities in Spain and the US to write, translate, and test instructions. They have also worked on multilateral projects to develop personas and to create infographics.
Suvi explained that TAPP projects are successful because they are administratively uncomplicated. Projects do not depend on funding, and the teachers collaborate to design assignments that match their modules and learning outcomes. Of course, as with any inter-institutional collaboration, teachers must take several steps to ensure projects are successful. Students need detailed instructions and scheduling can be complicated, especially when multiple institutions are involved.
Suvi’s talk concluded with a lively discussion and question and answer session. This IUNTC meeting, like many others, was very well-attended and included participants from all over the world. Several members of TAPP were present, including Dr Patricia Minacori, Dr Bruce Maylath (outgoing coordinator) and Dr Ashley Petts (coordinator since August 2022).
You can learn more about TAPP and how to get involved, at: The Trans-Atlantic and Pacific Project | University of Houston-Downtown (uhd.edu).
How do YOU search for existing knowledge?
IUNTC Meeting Explores Search Strategies in Technical Communication
Dr Kim Sydow Campbell led the most recent meeting of the International University Network in Technical Communication (IUNTC) on June 14th, 2022.
In this meeting, Kim explored this question: “How do YOU search for existing knowledge?” This is an important question for researchers, who need to be sure they have access to the breadth of published research about a topic. It is an important question for industry practitioners, who need to ensure they can access the most up-to-date information for their work projects and practice. And it is an important question for teachers, who need to help students develop systematic and comprehensive search strategies.
June 14, 2022 Text by Yvonne Cleary
During the first half of the meeting, Kim provided some background to the topic. Researchers in some fields (e.g. medical researchers) use a standard methodology for systematic literature reviews. Those reviews are then available as guidance to practitioners. In contrast, researchers in Business, Professional, and Technical Communication (BPTC) tend to have less systematic approaches when conducting literature reviews. They use multiple databases and, because of the expanding range of journals, they do not always know the extent and focus of recently published work. They conduct integrative literature reviews infrequently, and these are rarely available to practitioners. Thus, there is relatively little research-based guidance for BPTC researchers. There is also a lack of shared knowledge between academics and industry practitioners, and a lack of understanding about how trade publications and other industry sources are used in practitioner and academic research.
The second half of the meeting involved a lively and rich discussion among participants, both industry practitioners and academics, about our literature search methods. Some themes that emerged from that discussion included: learning from other disciplines like medicine, where the PRISMA framework is an accepted standard; the importance of including international journals in our searches; and the variety of sources with up-to-date content, including blogs, conferences, and dissertations. A key takeaway from the meeting was the need for shared guidance about where knowledge is held and how to access it.
Dr Kim Sydow Campbell is Professor of Technical Communication at the University of North Texas, USA. She has published several books and book chapters, and more than 40 peer reviewed journal articles. She was editor-in-chief of the IEEE Transactions on Professional Communication journal from 1998 to 2008. She is the recipient of multiple awards. This year, she was awarded the Society for Technical Communication’s Jay R. Gould Award for Excellence in Teaching Technical Communication.
How important is localizing user documentation?
A report from the latest meeting of the International University Network in Technical Communication (IUNTC)
May 20, 2022 Text by Yvonne Cleary and Daniela Straub
Because markets for technical products have globalized, it is essential to consider the impact of cultural preferences on the usability and user experience of user instructions.
On Thursday, May 12 over 50 participants joined the latest meeting of the IUNTC, led by Dr. Joyce Karreman, Assistant Professor of Technical Communication at the University of Twente in the Netherlands. Joyce’s presentation was entitled ‘How important is localizing user documentation? Research on the effects of localization on users.’ The presentation discussed a series of experiments Joyce and her colleagues, Qian Li and Menno de Jong, have conducted to explore how Chinese and Western user instructions differ, and to determine the effects of cultural adaptations on Chinese and Western users.
The first study Joyce described was a content analysis of 50 Western and 50 Chinese household manuals. The analysis confirmed previous research findings that demonstrated differences in content, structure, and visuals. In terms of content, Joyce and her colleagues found that Western user documentation tends to be highly instrumental: its purpose is to explain to end users how a device works. Chinese manuals are likely to be less purposeful and more entertaining. Chinese user documentation is not only meant for end users, but also for technicians, and may contain marketing as well as procedural information. Western documentation tends to be highly structured, e.g. using levels of headings, chunking, and lists. In contrast, in typical Chinese documentation, heading levels and lists are less likely to be consistent or prominent, and instructions may be presented in paragraphs rather than lists. Chinese manuals present expository information in an inductive structure (arguments first), whereas Western manuals present expository information in a deductive structure (conclusion first). In terms of visuals, again, Western instructional documents are functional, and visuals (e.g. technical drawings) are meant to clarify how a device works. In Chinese user documentation, cartoons and other entertaining or diverting visuals are used.
In a related study, Joyce and her colleagues interviewed twenty Chinese technical communicators, to explore their opinions about differences between Chinese and Western user documentation. The interviewees recognized the importance of culture and the different expectations of Western and Chinese users. They believed that Western users rely more on user documentation than Chinese users. Because the interviewees used style sheets and standardized tools, the research team concluded that Chinese manuals may become more like Western manuals in the future.
Joyce also described three user studies designed to explore the effects of Chinese and Western approaches to structure, expository information, and visuals. These studies were conducted with three participant groups: Chinese people based in China, Chinese people based in Europe, and Westerners based in Europe. The results of the study on document structure showed no significant differences in task performance or perceived usability. In the second study, examining the structure of expository text, both groups of Chinese participants preferred expository text presented inductively (with arguments first) for both readability and persuasiveness, while Western participants preferred information presented deductively. The third study examined the use of various types of cartoons and technical line drawings. The results were largely in line with expectations, that Chinese participants appreciated entertaining visuals while Western participants reacted more positively to technical visuals.
The overall conclusion of this work is that localization improves the user experience, but may not be essential for the usability of user documentation.
A lively question and answer session followed the talk, with questions and comments about how the studies had been conducted, demographics of participants, follow-up research, the hidden impact of culture, and an array of other topics. IUNTC participants thanked Joyce for her interesting and insightful presentation.
The next IUNTC meeting will be on Tuesday, June 14 at 15.00 (Central European Summer Time). Professor Kim Sydow Campbell from the University of North Texas will lead this meeting, entitled: “How do YOU search for existing knowledge? Listening to academics and practitioners in business/professional/technical communication across the globe”.
Further reading
Li, Q., De Jong, M. D. T., & Karreman, J. (2020). Cultural differences between Chinese and Western user instructions: A content analysis of user manuals for household appliances. IEEE Transactions on Professional Communication, 63(1), 3-20.
Li, Q., Karreman, J., & De Jong, M. D. T. (2019). Chinese Technical Communicators' Opinions on Cultural Differences between Chinese and Western User Manuals. ProComm 2019, IEEE International Professional Communication Conference, July, Aachen, Germany.
Li, Q., De Jong, M. D. T., & Karreman, J. (2021). Cultural differences and the structure of user instructions: Effects of Chinese and Western structuring principles on Chinese and Western users. Technical Communication, 68(1), 37-55.
Li, Q., Karreman, J., & De Jong, M. D. T. (2020). Inductively versus deductively structured product descriptions: Effects on Chinese and Western readers. Journal of Business and Technical Communication, 34(4), 335-363.
Li, Q., De Jong, M. D. T., & Karreman, J. (2021). Getting the picture: A Cross-Cultural Comparison of Chinese and Western Users’ Preferences for Image Types in Manuals for Household Appliances. Journal of Technical Writing and Communication, 51(2), 137-158.
Trends and demands in specialized technical communication
"New job profiles and new job titles in technical communication – how much agile project management does technical communication need?" This was the topic of the sixth online meeting of the International Network of Universities in Technical Communication (IUNTC).
March 2022 Text by Daniela Straub

Thirty-three participants joined the meeting, which started with an interesting lecture by Dr. Christiane Zehrer on “Agile Methods and their Meaning in Today's Working World”. Dr Zehrer is an agile practitioner (Certified Scrum Master) and a technical communication instructor at Hochschule Magdeburg-Stendal, Germany.
Currently, around 27.3 percent of employees in technical communication in Germany work in software companies, and about three percent of all employees in a software company are technical writers. The software industry is a growing segment, with a growing proportion of technical writers in business units.
The work of technical writers in software development differs significantly from other industries. For one thing, they use different tools: component content management systems, for example, are less common in software development than in other industries. They also create different types of information for use, e.g. API (Application Programming Interface) documentation and embedded help. In addition, there is the digitization trend: with growing digitization, products, business models, value chains, customer behavior, and the workplace all change. This also leads to radical changes in the work of technical communicators. For example, agile project management methods are already common in most software companies, leading to a stronger link between product development and technical communication.
A Mentimeter survey of meeting participants showed that most are already familiar with agile project management methods, especially Scrum and Kanban. Agile methods differ from classic project management mainly in their shorter and more detailed planning phases. Task packages are kept as small as possible and scheduled for a fixed time. Daily standups, for example, are fixed elements: teams coordinate their work through frequent but usually short meetings. Sprint backlogs – prioritized lists of the functions that are due at the end of the respective work period – are another important agile element. They consist of user stories, which describe the product from the user's point of view and focus on the task the user wants to accomplish. This provides a different perspective on the product. User stories are further divided into tasks; functions are secondary. User stories and tasks are usually visualized on a physical or virtual agile board, as sketched below.
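As an illustration of how these agile elements relate to one another, the minimal Python sketch below models a sprint-backlog item as a user story with its tasks. The field names follow the classic user-story form (who, what, why) and are illustrative assumptions, not part of any formal Scrum artifact.

# Minimal model of the agile elements described above: a user story
# with its tasks as one sprint-backlog item. Field names follow the
# classic user-story form and are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class UserStory:
    as_a: str                      # who
    i_want: str                    # what
    so_that: str                   # why
    tasks: list[str] = field(default_factory=list)

story = UserStory(
    as_a="machine operator",
    i_want="to reach the error-code table from the alarm screen",
    so_that="I can resolve faults without calling support",
    tasks=["Write error-code topic", "Link topic from UI text", "Review"],
)
sprint_backlog = [story]  # prioritized list of items for the sprint
print(sprint_backlog)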
Technical writers who work in the software industry are thus an integral part of the agile development process. The question in the group was whether technical communication professionals, especially in software development, need new skills such as training in agile methods, and whether the future will see an increased need for generalists or specialists. These questions were discussed in breakout rooms.
In conclusion, the following was noted: project management, time management, and team communication will play a particularly central role in the future. Technological trends for context-based information delivery are also becoming increasingly important, so technical competencies will be required, e.g. the ability to read code for testing and user experience design. Above all, however, skills in interpersonal communication remain essential. The self-understanding of technical writers is shifting toward that of an “information developer”.