In the end, I rolled up my sleeves, got to work, and achieved reasonable test coverage. In fact, the tests have already paid off by uncovering various bugs and inefficiencies, so the effort was well spent. Developing these tests was non-trivial, though, so here is an overview of why it is worthwhile to unit-test a full-screen console app and how to go about it.

The key insight, unsurprisingly, is to design for testability. I knew I wanted to unit-test the text editor from the beginning, so I had to come up with a design that allowed for this. Retrofitting tests into code that never considered testing is very difficult.


On Unix-like systems, the application writes special collections of bytes, known as escape sequences, to stdout. What the escape sequences look like and what they mean depends entirely on what stdout is attached to. If stdout is attached to a file, these sequences are nothing more than a sequence of bytes. Conversely, if stdout is attached to a terminal, the terminal interprets them and performs certain actions.


The most obvious control character you can think of is the line terminator, or \n. On its own, \n is nothing more than ASCII byte 10, but when the terminal sees this byte, it knows to advance the cursor to the next line and move it to the first column. The semantics of \n vary across systems, though: Windows interprets \n as just moving the cursor one line down, without rolling it back to the first column, which is why Windows line endings pair it with \r.


There are more complex escape sequences, of course. This is why you can trivially (assuming you know what terminal type you are talking to) embed escape sequences in printf or echo -e invocations to control how things look. You essentially use escape sequences as a way to mark up the text:
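For instance, here is a minimal sketch in Rust rather than shell. The SGR codes used (bold and red foreground) are standard ANSI/VT100 sequences, but whether a particular terminal honors them is an assumption:

```rust
fn main() {
    // \x1b is the ESC byte (ASCII 27). "ESC [ 1 m" enables bold and
    // "ESC [ 31 m" selects a red foreground on ANSI-compatible terminals;
    // "ESC [ 0 m" resets all attributes. Written to a file, these are
    // just bytes; written to a terminal, they change how text looks.
    println!("\x1b[1mbold\x1b[0m and \x1b[31mred\x1b[0m text");
}
```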


The first challenge is that we are trying to write tests for something that responds to user input, where every interaction causes changes that are inherently visual and seemingly require the human eye for interpretation.


Second, we need to worry about user input. After all, the TUI is interactive and reacts to user key presses, so our tests will have to drive the TUI. This is easier than dealing with the visual console updates, as all we have to do is represent the sequence of key presses to send to the app and then feed those to it in some way. Again, this is not very different from other tests: we have an algorithm and we inject some input.


The alternative to this idea of comparing screen contents is to capture the console commands that the app emits and compare those to expectations. This is easier to implement but has different fidelity tradeoffs as we shall see below.


If we follow the ideas presented above, we will end up with a bunch of tests that inject key presses into the TUI, capture the console manipulation commands it emits, and compare those against expectations.
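Sketched in Rust with hypothetical names (the ConsoleCmd enum and the golden sequence are illustrations, not the editor's real types), one such comparison might look like:

```rust
// Hypothetical representation of captured console operations.
#[derive(Debug, PartialEq)]
enum ConsoleCmd {
    MoveCursorTo(usize, usize), // (row, column)
    WriteText(String),
}

// Compares the commands captured from the TUI after typing "hi" against
// a hand-written golden sequence: write each character, then leave the
// cursor after the second one.
fn matches_golden(captured: &[ConsoleCmd]) -> bool {
    let golden = vec![
        ConsoleCmd::WriteText("h".to_string()),
        ConsoleCmd::WriteText("i".to_string()),
        ConsoleCmd::MoveCursorTo(0, 2),
    ];
    captured == golden.as_slice()
}
```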


Corner-case and regression validation. In the scenario we are looking at, a lot of the editor behavior is obvious: if we press the right arrow key, we know that the cursor has to move one position to the right if the line of text permits. If we break the way this works, the breakage will be extremely visible the moment we do any kind of manual test. Corner cases, however, are easy to break without noticing, and that is where these tests earn their keep.


Efficiency measures. The last benefit these tests give us is a way to measure efficiency. By capturing the sequence of commands emitted by the TUI logic, we can check whether those commands are minimal: if they are not, the TUI will flicker.


The downside of this testing approach is that, again, we have no visual knowledge of how the screen looks in the tests. Having the tests is thus insufficient to validate the TUI behavior: if we change the editor code, we still have to inspect manually and visually that the new behavior is correct. But the idea is that we only need to do that minimal check-up for the new behavior once. After that, we can trust the tests to catch unexpected changes anywhere else.


Given this interface, our editor implements an event loop based on the return value of Console::read_key() and uses the generic Key representation to process both control operations (e.g. moving the cursor) and edit operations (e.g. actual character insertion).
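A minimal sketch of that shape, using the Console and Key names from the text; the specific enum variants, the Eof terminator, and the returned key count are illustrative assumptions, not the editor's real API:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Key {
    Char(char), // edit operation: actual character insertion
    ArrowRight, // control operation: moving the cursor
    Eof,        // end of input; terminates the loop
}

trait Console {
    /// Blocks until a key press is available and returns it in the
    /// generic Key representation.
    fn read_key(&mut self) -> Key;
}

/// Event loop skeleton: dispatch on the value returned by read_key().
/// Returns how many keys were processed so the sketch has something
/// observable to assert on.
fn run(console: &mut dyn Console) -> usize {
    let mut processed = 0;
    loop {
        match console.read_key() {
            Key::Eof => return processed,
            Key::ArrowRight => processed += 1, // would move the cursor
            Key::Char(_ch) => processed += 1,  // would insert the character
        }
    }
}
```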


And given this interface, supplying fake input data to our TUI is trivial. All we need to do is declare a MockConsole implementation with a mock read_key that, for each key press, yields a pre-recorded value taken from a sequence of golden key presses:
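A sketch of such a MockConsole follows; the abbreviated Key enum and the Eof-on-exhaustion behavior are assumptions made for the illustration:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Key { Char(char), ArrowRight, Eof }

/// Fake console whose read_key() replays a pre-recorded golden sequence.
struct MockConsole {
    golden_keys: Vec<Key>,
    next: usize, // index of the next key press to yield
}

impl MockConsole {
    fn new(golden_keys: Vec<Key>) -> Self {
        Self { golden_keys, next: 0 }
    }

    /// Yields the next golden key press; once the sequence is exhausted,
    /// reports Eof so the event loop under test terminates.
    fn read_key(&mut self) -> Key {
        let key = self.golden_keys.get(self.next).copied().unwrap_or(Key::Eof);
        self.next += 1;
        key
    }
}
```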


MockConsoleBuilder: A builder to construct the golden input that the MockConsole contains. This builder will let us accumulate input via separate calls using whichever representation makes more sense for the data at hand: either with add_input_chars() to record long sequences of characters without the Key::Char boilerplate or with use_input_keys() to precisely inject Key instances.
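In sketch form, with the method names taken from the description above (having build() return the raw key list, rather than a MockConsole, is a simplification for the illustration):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Key { Char(char), ArrowRight }

#[derive(Default)]
struct MockConsoleBuilder {
    keys: Vec<Key>, // golden input accumulated across calls
}

impl MockConsoleBuilder {
    /// Records a long run of characters without Key::Char boilerplate.
    fn add_input_chars(mut self, chars: &str) -> Self {
        self.keys.extend(chars.chars().map(Key::Char));
        self
    }

    /// Precisely injects Key instances, e.g. cursor movements.
    fn use_input_keys(mut self, keys: &[Key]) -> Self {
        self.keys.extend_from_slice(keys);
        self
    }

    /// Finishes the builder; a real implementation would hand the golden
    /// input to a MockConsole instead of returning the raw list.
    fn build(self) -> Vec<Key> {
        self.keys
    }
}
```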


Let me conclude by repeating what I mentioned in the introduction: writing these tests has been super-valuable. They uncovered bugs in corner cases; they let me clean up a lot of the original code duplication in the editor with confidence; and they exposed inefficiencies such as the editor issuing full refreshes when it should have been doing quick status updates.




Julio Merino

A blog on operating systems, programming languages, testing, build systems, my own software projects and even personal productivity. Specifics include FreeBSD, Linux, Rust, Bazel and EndBASIC.




This study proposes a classification method for multiple text reading test formats in English language proficiency tests. A preliminary study involving 11 proficiency tests revealed two tests that fit the scope of the main study. Results show that multiple text reading test formats use complementary texts rather than conflicting texts. As for the questions in a set of test forms in multiple text reading test formats, the cognitive processing involved in integrating the contents of the texts differed across proficiency tests. Moreover, the type of connection formation required by the questions differed among the proficiency tests. Implications for pedagogy are presented.


Reading in real-life situations aims to understand texts from different sources, such as various authors and publishers (Karimi, 2015). Thus, understanding multiple texts is not an exception but is typical behavior in modern society. However, reading multiple texts is more complicated than reading individual ones. First, the structures of different texts deviate from the principles of consistency and cohesion. Unlike single texts, multiple texts do not explicitly state how individual texts should be associated; therefore, readers must consciously choose to integrate and interpret the information they contain (Anmarkrud et al., 2014). Second, integrating multiple texts requires the reader to infer connections between texts. To understand them in an integrated manner, readers must form connections by making inferences about individual texts that are free of consistency and cohesion (Schedl et al., 2021). Thus, understanding multiple texts is challenging, even for first language (L1) readers (Bråten & Braasch, 2018).


Furthermore, since second language (L2) learners encounter various texts in both academic scenarios and everyday discourse, their multiple text comprehension skills affect their academic success and value formation in everyday life (Karimi & Richter, 2021). Therefore, acquiring these comprehension skills is critical for L2 learners.


In comprehending multiple texts, readers must integrate information from diverse viewpoints and scrutinize semantic correlations across a spectrum of texts (Karimi, 2015; List et al., 2021a; Rouet et al., 2021). The challenges of interpreting and comprehending multiple texts are significantly influenced by the relationships between them, encompassing instances of texts containing both complementary and conflicting ideas (Britt et al., 1999; Perfetti et al., 1999). For instance, when readers delve into two complementary texts, they formulate mental representations from a singular standpoint. After reading the first text and perusing the second text, the reader revises the constructed mental representation, aligning information from the second text with the initial one (Perfetti et al., 1999). Conversely, navigating conflicting texts entails a more intricate process (Kobayashi, 2014; Kurby et al., 2005; Rouet et al., 2021).


The discrepancy-induced source comprehension (D-ISC) model by Braasch and Bråten (2017) posits that when confronted with conflicting texts, readers experience cognitive disequilibrium or imbalance due to incongruities in the presented information. Grappling with these conflicting texts challenges readers to cultivate a more profound and adaptable understanding of specific issues or subjects by considering diverse information sources (van den Broek & Kendeou, 2015). Nevertheless, discerning connections across texts and evaluating information from conflicting texts entails cognitively taxing and intricate processes, even for readers proficient in their first language (Anmarkrud et al., 2014). Without explicit guidance, they frequently overlook conflicting information and abstain from constructing coherent mental representations of the connections between the texts (Stadtler et al., 2020). However, limited test analysis research has focused on the connections readers must form when engaging multiple texts in proficiency tests.
