Farinola A (2023), "Hermeneutical postphenomenology: Computational tools and the lure of objectivity", Digital Scholarship in the Humanities., January, 2023. Vol. 38(3), pp. 1078-1087. Oxford University Press (OUP).
Abstract: This paper examines some of the issues surrounding this ‘computational turn’ and proposes a ‘human-focused’ approach. It begins by addressing questions concerning whether there is a need for empirical evidence in literary studies, and the roles played by human and technical agents in interpretative practices. It adopts Don Ihde's postphenomenological ideas (especially the ‘embodiment’ and ‘hermeneutic’ human-technology relations) to expatiate on the nature of the relationship that exists between a literary scholar and a computational tool during interpretative practices. On the one hand, it uses postphenomenology as a theoretical framework that provides rich conceptual terminology with which we can interrogate the humanist-computer relationship within the practice of computational textual analysis. On the other hand, it uses the postphenomenological method to highlight the notion of subjectivity in contrast to the promise of observer-independent objectivity. The research appraises the impact of quantification and visualization in literary studies using the research output of Franco Moretti and his colleagues at the Stanford Literary Lab, and performs textual experimentation on several corpora using Stéfan Sinclair and Geoffrey Rockwell’s Voyant Tools and Python NLP packages. It refutes the idea that quantification and visualization of textual data with computational tools and methods could guarantee objectivity of textual interpretation in literary studies. The argument in this paper is divided across three sections. The first section discusses the goal of reading; it concerns ‘close reading’ and ‘distant reading’. The second section questions the possibility of objectivity in textual interpretation using quantifiable and visual evidence provided as the output of the computational analysis of humanistic texts.
The third section defends the following claims: a) the human person is the principal actor in the interpretation process, and all other forms of representation or visualisation of the text are meant to aid humans; b) data in the humanities are not limited to printed texts, but include digitised, born-digital and electronic texts in various digital forms (images, sounds, videos, etc.); c) the humanities aim more at interpretive practices than at a quest for verifiable knowledge; d) interpretative practices in the humanities focus on humans and their experiences; e) attention must be drawn to human developers’ subjectivity whenever we use computational interpretative tools, on the grounds that this will help in bracketing our biases, prejudices, preconceptions, and theoretical frameworks. The paper concludes with the argument that computational tools used for textual analysis in the humanities themselves need interpretation, as they are not neutral in hermeneutic practices. It argues that the humans involved in the creation of those tools are prone to errors, have preferences, and incorporate their subjective ideas into development processes. It then proposes ‘Hermeneutical Postphenomenology’ as an ideal lens through which the claim to objectivity can be debunked, and recommends that our productivity in textual scholarship can be enriched when we understand the true nature of the relationship between the human inquirer and technical agents within the cognitive assemblage.
BibTeX:
@article{Farinola2023HermeneuticalpostphenomenologyComputational,
  author = {Farinola, Augustine},
  title = {Hermeneutical postphenomenology: Computational tools and the lure of objectivity},
  journal = {Digital Scholarship in the Humanities},
  publisher = {Oxford University Press (OUP)},
  year = {2023},
  volume = {38},
  number = {3},
  pages = {1078--1087},
  doi = {10.1093/llc/fqac074}
}
Rockwell G and Sinclair S (2022), "The Interface of HyperPo", Online.
Abstract: An annotated collection of screenshots of HyperPo, the web-based text analysis software developed by Stéfan Sinclair. This document evolved out of the research Geoffrey Rockwell did for the panel at DH Unbound 2022 in honour of Stéfan Sinclair. He wanted to honour him by going back to his dissertation and discussing the genesis of many of the ideas for Voyant that were first developed in HyperPo. The document gathers screenshots in order to document HyperPo.
BibTeX:
@misc{GeoffreyRockwell2022InterfaceHyperPo,
  author = {Geoffrey Rockwell and Stéfan Sinclair},
  title = {The Interface of HyperPo},
  howpublished = {Online},
  publisher = {University of Alberta Library},
  year = {2022},
  url = {https://era.library.ualberta.ca/items/8c9e217b-6d26-47c8-951e-fc214f1a174b},
  doi = {10.7939/R3-DTA4-K492}
}
Bonino G and Tripodi P (2021), "Distant Reading and the Problem of Operationalization. Goldilockean Considerations", COSMO. Vol. 18, pp. 187-196.
Abstract: The paper focuses on the role of operationalization (i.e., the building of models and the setting down of rules of annotation) in quantitative research in the humanities, and especially in the history of ideas. On the one hand, the presence of fully explicit annotation rules and fully operationalized concepts allows one to formulate claims that are clearly verifiable, or falsifiable, or in any case testable. On the other hand, full operationalization seems to have some controversial aspects: is it practically feasible? Is verifiability what we always want to achieve in the humanities? Are operationalized concepts semantically “well-anchored”?
BibTeX:
@article{Bonino2021DistantReadingProblem,
  author = {Bonino, Guido and Tripodi, Paolo},
  title = {Distant Reading and the Problem of Operationalization. Goldilockean Considerations},
  journal = {COSMO},
  year = {2021},
  volume = {18},
  pages = {187--196},
  url = {https://iris.unito.it/handle/2318/1811209}
}
Grant K, Dombrowski Q, Ranaweera K, Rodriguez-Arenas O, Sinclair S and Rockwell G (2020), "Absorbing DiRT: Tool Directories in the Digital Age", Digital Studies / Le Champ Numérique., June, 2020. Vol. 10(1) Open Library of Humanities.
Abstract: In the summer of 2017, Quinn Dombrowski, an IT staff member in UC Berkeley's Research IT group, approached Geoffrey Rockwell about the possibility of merging the DiRT Directory with TAPoR, both popular tool discovery portals. Dombrowski could no longer offer the time commitment required to maintain the organizational structure of the volunteer-run tool directory (2018). This decommissioning of DiRT illustrates a set of problems in the digital humanities around tool directories, and the tools within them, as academic contributions. Tool development, in general, is not considered sufficiently scholarly and often suffers from a lack of ongoing support (Ramsay & Rockwell, 2012). When tool discovery portals are no longer maintained due to a lack of ongoing funding, this leads to a loss of digital humanities knowledge and history. While volunteer-based directories require less outright funding, managing and motivating those volunteers to ensure that they remain actively involved in directory upkeep requires a vast amount of work to ensure long-term sustainability (Dombrowski, 2018). This paper will explore the difficult history of tool discovery catalogues and portals and the steps being taken to save the DiRT Directory by integrating it into TAPoR. In particular, we will: – provide a brief history of the attempts to catalogue tools for digital humanists, starting with the first software catalogues, such as those circulated through societies, and ending with digital discovery portals, including the DiRT Directory and TAPoR; – discuss the challenges around the maintenance of discovery portals; – consider the design and metadata decisions made in the merging of the DiRT Directory with TAPoR.
BibTeX:
@article{Grant2020AbsorbingDiRTTool,
  author = {Grant, Kaitlyn and Dombrowski, Quinn and Ranaweera, Kamal and Rodriguez-Arenas, Omar and Sinclair, Stéfan and Rockwell, Geoffrey},
  title = {Absorbing DiRT: Tool Directories in the Digital Age},
  journal = {Digital Studies / Le Champ Numérique},
  publisher = {Open Library of Humanities},
  year = {2020},
  volume = {10},
  number = {1},
  url = {https://www.digitalstudies.org/article/id/7352/},
  doi = {10.16995/dscn.325}
}
Moretti F and Sobchuk O (2019), "Hidden In Plain Sight", New Left Review., August, 2019. Vol. II(118), pp. 86-115.
Abstract: What is the meaning of data visualization in the Digital Humanities? What does it hide, in its acts of revealing? Considerations of time and form, data and theory, as two researchers put their field in a comparative frame. If there is one feature that immediately distinguishes the digital humanities (dh) from the ‘other’ humanities, data visualization has to be it. Histograms, scatterplots, time series, diagrams, networks . . . ten, fifteen years ago, studies of film, music, literature or art didn’t use any of these. Now they do, and here we examine some premises (unspoken, and often probably unconscious) of this field-defining practice. Field-defining, because visualization is never just visualization: it involves the formation of corpora, the definition of data, their elaboration, and often some sort of preliminary interpretation as well. Whence the idea of this article: to gather sixty-odd studies that have had a significant impact on dh, and analyse how they visually present their data. What interests us is visualization as a practice, in the conviction that practices—what we learn to do by doing, by professional habit, without being fully aware of what we are doing—often have larger theoretical implications than theoretical statements themselves. Whether this has indeed been the case for dh, is for readers to decide.
BibTeX:
@article{Moretti2019HiddenPlainSight,
  author = {Moretti, Franco and Sobchuk, Oleg},
  title = {Hidden In Plain Sight},
  journal = {New Left Review},
  year = {2019},
  volume = {II},
  number = {118},
  pages = {86--115},
  url = {https://newleftreview.org/issues/ii118/articles/franco-moretti-oleg-sobchuk-hidden-in-plain-sight}
}
Glaubitz N (2018), "Zooming in, zooming out: The debate on close and distant reading and the case for critical digital humanities", In Anglistentag 2017 Regensburg, Proceedings. Trier, pp. 1-10. WVT, Wissenschaftlicher Verlag Trier.
Abstract: Ever since Franco Moretti presented 'distant reading' as an approach that would render close reading obsolete, discussions on methods of 'digital humanities' in literary studies have centered on the perceived opposition of close and distant reading. Like other debates between methodological or theoretical camps, this discussion is fraught with polemics, simplifications and sometimes exaggerated claims and expectations (Herrnstein-Smith 2016, 72-73) – all the more because it is also a proxy debate for larger issues such as competition for funding, staff and public perception of the humanities in general (Liu 2012, 492). As the contributions to a recent issue of PMLA on distant reading show, however, the debate has also been welcomed as a chance to take stock of existing methods, concepts and practices guiding research and teaching in literary studies.

My paper will, first of all, clarify what 'close' and 'distant' have come to stand for in this debate, and try to describe more accurately what close and distant reading are. I will then discuss the metaphor of 'zooming in and out' which is frequently employed in computational literary studies to suggest a seamless integration of 'close' to 'distant' perspectives. I will argue that the metaphor of zooming, like other spatial metaphors, is neither particularly helpful in describing the method of computational text analysis nor in criticizing it (as I will show with respect to Matthew Jockers and Alan Liu). I will conclude with the arguments that an accurate description of the objects and methods of close and distant reading is a first step towards a productive critique of both approaches, and that the critical perspective often found missing in computational approaches could turn into the concern of critical digital humanities.
BibTeX:
@inproceedings{Glaubitz2018Zoominginzooming,
  author = {Nicola Glaubitz},
  editor = {Anne-Julia Zwierlein and Jochen Petzold and Katharina Boehm and Martin Decker},
  title = {Zooming in, zooming out: The debate on close and distant reading and the case for critical digital humanities},
  booktitle = {Anglistentag 2017 Regensburg, Proceedings},
  publisher = {WVT, Wissenschaftlicher Verlag Trier},
  year = {2018},
  pages = {1--10}
}
Miller A (2018), "Text Mining Digital Humanities Projects: Assessing Content Analysis Capabilities of Voyant Tools", Journal of Web Librarianship., July, 2018. Vol. 12(3), pp. 169-197. Informa UK Limited.
Abstract: Text mining is a method that aids in the analytic process and interpretation of research. Voyant Tools (voyant-tools.org) is an open source text-mining option that is user-friendly and well documented. This tool was chosen as a test study for one of the latest projects, entitled Trials and Triumphs, at Middle Tennessee State University. The Trials and Triumphs project has been reengineered with new content, themes, and connections relating to Tennessee’s history between 1865 and 1965. Transformations are not just the subject during this historic time period but are equally met with transformative technical upgrades to the project’s previous interpretative layout. Digital Scholarship Initiatives at Middle Tennessee State University’s Walker Library tested the application and use of Voyant Tools to determine whether its text analysis capabilities are well suited for the Trials and Triumphs revitalization project (now called Trials, Triumphs, and Transformations: Tennesseans' Search for Citizenship, Community, and Opportunity) and whether its interoperability with Drupal was worth pursuing. The author describes the results of this test study and consequently intends this article to be a practical guide for librarians or similar scholars who develop digital humanities projects, and who are interested in beginning a text mining project.
BibTeX:
@article{Miller2018TextMiningDigital,
  author = {Miller, A.},
  title = {Text Mining Digital Humanities Projects: Assessing Content Analysis Capabilities of Voyant Tools},
  journal = {Journal of Web Librarianship},
  publisher = {Informa UK Limited},
  year = {2018},
  volume = {12},
  number = {3},
  pages = {169--197},
  doi = {10.1080/19322909.2018.1479673}
}
van de Ven I (2018), "Too Much to Read? Negotiating (Il)legibility between Close and Distant Reading", In Legibility in the Age of Signs and Machines. Leyden, October, 2018, pp. 180-196. BRILL.
Abstract: In this chapter, Inge van de Ven argues for the development of modes of reading that oscillate between ‘close’ and ‘distant’ reading. In order to move beyond the prevalent dichotomy between both reading strategies, the author proposes an alternative perspective that considers reading in terms of scale variance. This article outlines ways to combine classical-humanist attention to the singular object with methods applicable to variable scales of textuality. How does what we consider ‘legible’ change under the influence of digitization and datafication? How to decide what to read and what to outsource? How to combine hermeneutics with computation? Rethinking the interrelations between close and distant reading is of vital importance for teaching students how to read, and how not to read, in the information age.
BibTeX:
@inbook{vandeVen2018TooMuchRead,
  author = {Inge van de Ven},
  title = {Too Much to Read? Negotiating (Il)legibility between Close and Distant Reading},
  booktitle = {Legibility in the Age of Signs and Machines},
  publisher = {BRILL},
  year = {2018},
  pages = {180--196},
  doi = {10.1163/9789004376175_013}
}
Drucker J (2017), "Why Distant Reading Isn’t", PMLA. Vol. 132(3), pp. 628-635. Cambridge University Press.
Abstract: Distant reading is the computational processing of textual information in digital form. It relies on automated procedures whose design involves strategic human decisions about what to search for, count, match, analyze, and then represent as outcomes in numeric or visual form [the reading "model"]. (...) Data mining includes any activity of abstracting information to create or detect patterns. In digital processes, this activity uses algorithms that follow instructions about what to find, match, or count according to the parameters set by the model. (...) The results can be listed in a table, spreadsheet, graph, chart, or other mode of display. This data aggregation and display can be embedded in an interactive environment, where filters are applied to search selectively — by date, title, or other relevant information. The difference between a computational version of this process and a human one is largely scale. The mathematical and computational complexity of algorithms might make them hard for an individual to imitate, but, conversely, intuitive decisions that are part of human reading are difficult to specify algorithmically. When these decision-making processes are automated, they can be done at speeds, volumes, and scales (nano-, micro-, macro-) impossible for a human reader to match. Using algorithms to make decisions is, in essence, how the automated text analysis known as distant reading works.

Still, no matter how sophisticated the algorithms, they are all based on models designed as interpretative acts. The distinction between human engagements with symbolic codes and human engagements with machine ones need not be based on a romantic view of humanity but can be made on the terms the machines embody. Automated processes are different from those of a human reader, whose fallibility and vulnerability are crucial to the production of meaning. The machines are fallible too, of course—bugs, errors, and processing mistakes abound—but they are mechanical failures, not the inflected expression of individual thought projected onto and entwined in a work that is produced anew through every interpretative act. The distinction between mechanical and hermeneutic reading, between machine processing and cognitive engagement, between the automatic and the interpretative, between unmotivated and motivated encounters with texts, is essential. Processing is not reading. It is literal, automatic, and repetitive. Reading is ideational, hermeneutic, generative, and productive. Processing strives for accuracy, reading for leniency or transformation.

BibTeX:
@article{Drucker2017WhyDistantReading,
  author = {Johanna Drucker},
  title = {Why Distant Reading Isn’t},
  journal = {PMLA},
  publisher = {Cambridge University Press},
  year = {2017},
  volume = {132},
  number = {3},
  pages = {628--635},
  url = {https://www.jstor.org/stable/27037376}
}
Goldstone A (2017), "The Doxa of Reading", PMLA. Vol. 132(3), pp. 636-642. Cambridge University Press.
Abstract: Reading Franco Moretti’s Graphs, Maps, Trees as a late-stage graduate student in 2008 was invigorating. Here was an approach to literary history free from the pieties of close reading, committed to empiricism, seeking to fulfill, with its "materialist conception of form," the promise of the sociology of literature (92). And, at the time, it seemed natural that the way to follow the path laid out by Moretti in Graphs and in the essays he had published over the previous decade was to go to my computer, polish my rusty programming skills, and start making graphs. Yet reconsidering Moretti’s Distant Reading now, one is struck by how nondigital the book is. In fact, the meaning of distant reading has undergone a rapid semantic transformation. In "Conjectures on World Literature," originally published in 2000, Moretti introduces the phrase to describe “a patchwork of other people’s research, without a single direct textual reading” (Distant Reading 48). Today, however, distant reading typically refers to computational studies of text. Introducing a 2016 cluster of essays called "Text Analysis at Scale," Matthew K. Gold and Lauren Klein employ the term to speak of “using digital tools to 'read' large swaths of text” (Introduction); in his contribution to the cluster, Ted Underwood embraces "distant reading" as a name for applying machine-learning techniques to unstructured text. Discussions of distant reading have become discussions of computation with text, even if no section of Distant Reading features the elaborate computations found in the Stanford Literary Lab pamphlets to which Moretti has contributed. [...]

Though the work subsequently done under the distant-reading rubric has been lively and varied, the changing meaning of the phrase has obscured rather than answered some of the most significant questions raised in Distant Reading and elsewhere about the methods and aims of literary study. Moretti’s work joined a current of historicist and sociological challenges to what I call, after Pierre Bourdieu, the doxa of reading: the assumption that the primary activity of academic literary study is textual interpretation. Under this assumption, “reading” includes both reading expertly and producing expert readings of texts in articles and books. Distant Reading sometimes challenges the centrality of reading to the study of literary and social systems. But this challenge has been blunted in two ways: it has been misrecognized in terms of a confrontation between “close” and “distant” techniques of “reading,” and it has been displaced from the research agenda by developments in the practice of distant reading, which I polemically summarize as textualization, driven by a return to national frameworks and the increasing predominance of the text corpus as analytic object. These developments, I show, are also anticipated in the latest essays in Distant Reading, but they should not be regarded as the natural outcome of Moretti’s challenge to the doxa of reading. By reformulating the proposal for a major disciplinary realignment as a question of what can be learned from computational readings of monolingual text corpora, the practice of distant reading has yet to meet the strongest demands of Distant Reading.
BibTeX:
@article{Goldstone2017DoxaReading,
  author = {Andrew Goldstone},
  title = {The Doxa of Reading},
  journal = {PMLA},
  publisher = {Cambridge University Press},
  year = {2017},
  volume = {132},
  number = {3},
  pages = {636--642},
  url = {https://www.jstor.org/stable/27037377}
}
Hammond A (2017), "The double bind of validation: distant reading and the digital humanities’ “trough of disillusionment”", Literature Compass., August, 2017. Vol. 14(8) Wiley.
Abstract: The digital humanities (DH) is currently in the phase of the hype cycle known as the trough of disillusionment. Franco Moretti, perhaps the most prominent practitioner of the most prominent discipline of DH — distant reading, the computational analysis of large quantities of literary texts—recently expressed his exasperation with the state of DH, reflecting that the field's work could have been better and asking why, considering the amount of energy, talent, and tools going into DH, it has such difficulty producing great results. Surveying leading recent work in distant reading by Moretti, Matthew L. Jockers, Laura Mandell, Ryan Heuser, Long Le-Khac, and Joanna Swafford, this paper provides a twofold explanation of the field's failure to produce great results. Both explanations relate to validation, the process by which quantitative results are shown to be reliable and trustworthy. Many distant reading projects have produced disappointing results because they have been more interested in validating their tools—showing that their computational methods are able to confirm existing stereotypes — than in pursuing genuine discoveries. Many others, meanwhile, produce provocative results that cannot be meaningfully validated. Although the double bind of validation is real, I propose collaboration and interdisciplinary adaptation as promising solutions.
BibTeX:
@article{Hammond2017doublebindvalidation,
  author = {Hammond, Adam},
  title = {The double bind of validation: distant reading and the digital humanities’ “trough of disillusionment”},
  journal = {Literature Compass},
  publisher = {Wiley},
  year = {2017},
  volume = {14},
  number = {8},
  doi = {10.1111/lic3.12402}
}
Mayer A (2017), "Hermeneutica, une expérience numérique de l'interprétation : Hermeneutica. Computer-assisted interpretation in the humanities, de Geoffrey Rockwell et Stéfan Sinclair", Sens public., March, 2017. Consortium Erudit.
Abstract: With Hermeneutica: Computer-Assisted Interpretation in the Humanities (MIT Press, 2016), Geoffrey Rockwell and Stéfan Sinclair examine how the interpretation of texts is being transformed in the digital environment. In particular, through a hybrid methodology that brings reflections and examples, theory and practice of interpretation into dialogue, they consider what computer-assisted text analysis tools reveal about, and how they inflect, hermeneutic activity. At the heart of the essay is Voyant, a digital environment for textual quantification and visualization developed by the co-authors, which serves as their material for addressing the contemporary mutations of reading in the humanities.
BibTeX:
@article{Mayer2017Hermeneuticauneexperience,
  author = {Ariane Mayer},
  title = {Hermeneutica, une expérience numérique de l'interprétation : Hermeneutica. Computer-assisted interpretation in the humanities, de Geoffrey Rockwell et Stéfan Sinclair},
  journal = {Sens public},
  publisher = {Consortium Erudit},
  year = {2017},
  doi = {10.7202/1048829ar}
}
Moretti F (2017), "Patterns and interpretation", Stanford LitLab Pamphlet.
Abstract: One thing for sure: digitization has completely changed the literary archive. People like me used to work on a few hundred nineteenth-century novels; today, we work on thousands of them; tomorrow, hundreds of thousands. This has had a major effect on literary history, obviously enough, but also on critical methodology; because, when we work on 200,000 novels instead of 200, we are not doing the same thing, 1,000 times bigger; we are doing a different thing. The new scale changes our relationship to our object, and in fact it changes the object itself. [...] Meaning is not one of the things literary critics study; it is the thing. Here lies the great challenge of computational criticism: thinking about literature, removing meaning to the periphery of the picture. But of course this is also the great challenge for computational criticism: you discard meaning and replace it with – what?
BibTeX:
@misc{Moretti2017Patternsinterpretation,
  author = {Franco Moretti},
  title = {Patterns and interpretation},
  howpublished = {Stanford Literary Lab Pamphlet 15},
  year = {2017},
  url = {https://litlab.stanford.edu/LiteraryLabPamphlet15.pdf}
}
Erb M, Ganahl S and Kilian P (2016), "Distant Reading and Discourse Analysis", Le foucaldien., June, 2016. Vol. 2(1), pp. 8. Open Library of Humanities.
Abstract: With the publication of this special issue, Le foucaldien continues its experiment of updating the thought of Michel Foucault. Can historical discourse analyses be carried out with the aid of computers? In order to examine this question, we compare Franco Moretti's Distant Reading with Foucault's archaeological method. Despite their common origins in the French Annales School, the two approaches differ fundamentally. While Moretti interprets literary data by means of social history, Foucault seeks the immanent meaning of discourses. Our preliminary conclusion: digital archaeology appears to founder on the operationalization of the complex concept of the statement (énoncé).
BibTeX:
@article{Erb2016DistantReadingDiscourse,
  author = {Erb, Maurice and Ganahl, Simon and Kilian, Patrick},
  title = {Distant Reading and Discourse Analysis},
  journal = {Le foucaldien},
  publisher = {Open Library of Humanities},
  year = {2016},
  volume = {2},
  number = {1},
  pages = {8},
  doi = {10.16995/lefou.16}
}
Rockwell G and Sinclair S (2016), "Hermeneutica: Computer-Assisted Interpretation in the Humanities", Cambridge, MA: The MIT Press.
Abstract: Hermeneutica is a story about methods of interpretation. It is a story of a return to dialogical practices that predate Descartes and an explanation of our turn to the computer-assisted methods that are becoming hermeneutically interesting with the digitization of the human record. [...] While working on Hermeneutica we modeled a collaborative practice, Agile Hermeneutics (AH), loosely on a programming methodology called Extreme Programming (XP). [...] Traditional programming wisdom emphasized the need for careful analysis and specification before coding. XP recommends trying something early without a lot of specifications; XP also recommends rapid iterations as experiments to evolve specifications, incremental development, and continual reflection. Rather than analyzing the big picture and fully planning the final product before beginning, agile programmers code one version, reflect, and begin again. They work toward what is needed, as opposed to what was specified. Often that means throwing out code and starting all over when functionality calls for redesigned data structures. Whereas traditional wisdom holds that rewriting code is a sign of failure, XP makes it a productive and instructive part of the process. [...] AH is pragmatic. Small experiments generate hermeneutical theories as the products of interpretation: texts and tools. Code is quickly hacked to test an idea. Methods, and their instantiation in tools, are discussed reflexively throughout the experiment. Above all, where the Cartesian practices involve reflection and talking with the self, AH is about talking with another person who has complementary skills and summarizing those conversations in various ways. [...] Hermeneutica is also part of a movement that integrates method and interrogation. We see development as a form of research. 
Our research is simultaneously about how we might think (to echo a methodological formulation by Vannevar Bush) while thinking through prototyping, coding, documenting, and testing with real questions. It is a particular type of research craft in which one of the important outcomes is a re-imagination of how research tools should be designed to fit into the cycle of research.
BibTeX:
@book{Rockwell2016HermeneuticaComputerAssisted,
  author = {Rockwell, Geoffrey and Sinclair, Stéfan},
  address = {Cambridge, MA},
  title = {Hermeneutica: Computer-Assisted Interpretation in the Humanities},
  publisher = {The MIT Press},
  year = {2016}
}
Underwood T (2016), "Distant Reading and Recent Intellectual History", In Debates in the Digital Humanities. Minneapolis, MN, May, 2016, pp. 530-533. University of Minnesota Press.
Abstract: Distant reading is better understood as part of a broad intellectual shift that has also been transforming the social sciences. The best-publicized part of this shared story is an increase in the sheer availability of data, mediated by the Internet and digital libraries. Because changes of scale are easy to describe, journalists often stop here — reducing recent intellectual history to the buzzword “big data.” The more interesting part of the story is philosophical rather than technical, and involves what Leo Breiman, fifteen years ago, called a new “culture” of statistical modeling (Breiman). The conceptual premises informing models may at first seem arcane, but they’re playing a crucial role behind the scenes: this is the fundamental reason why disciplines that used to seem remote from humanists are now working with us on shared problems. In the twentieth century, the difficulty of representing unstructured text divided the quantitative social sciences from the humanities. Sociologists could use numbers to understand social mobility or inequality, but they had a hard time connecting those equations to the larger and richer domain of human discourse.

Over the last twenty years, that barrier has fallen. A theory of learning that emphasizes generalization has shown researchers how to train models that have thousands of variables without creating the false precision called “overfitting.” That conceptual advance would be interesting in itself. But it also allows researchers to include qualitative evidence like text in a quantitative model by the simple expedient of using lots of variables (say, one for each word). Social scientists can now connect structured social evidence to loosely structured texts or images or sounds, and they’re discovering that this connection opens up fascinating questions. Humanists are discovering the same thing. Distant reading may have begun with familiar forms of counting akin to book history. (How many novels were published in 1850?) But much of the momentum it acquired over the last decade came from the same representational strategies that are transforming social science. Instead of simply counting words or volumes, distant readers increasingly treat writing as a field of relations to be modeled, using equations that connect linguistic variables to social ones. Once we grasp how this story fits into the larger intellectual history of our time, it no longer makes much sense to frame it as a debate within literary studies. The change we are experiencing is precisely that quantitative and qualitative evidence are becoming easier to combine, blurring disciplinary boundaries. We’re working on a methodological continuum now that extends from history and literature through linguistics and sociology. Scholars are still free to specialize in parts of the continuum, of course, and specialization is still valuable. But nothing prevents us from ranging more widely. Since human affairs are also a continuum, we should feel free to use whatever mixture of methods gives us leverage on a particular problem.
BibTeX:
@inbook{Underwood2016DistantReadingRecent,
  author = {Underwood, Ted},
  title = {Distant Reading and Recent Intellectual History},
  booktitle = {Debates in the Digital Humanities},
  publisher = {University of Minnesota Press},
  year = {2016},
  pages = {530--533},
  doi = {10.5749/j.ctt1cn6thb.47}
}
Sinclair S and Rockwell G (2015), "Text Analysis and Visualization", In A New Companion to Digital Humanities, pp. 274-290. John Wiley & Sons, Ltd.
Abstract: The analytical practices of the digital humanities are becoming ubiquitous as digital textuality continues to surround and overwhelm us. This chapter is an introduction to thinking through the analysis and visualization of electronic texts. We start by asking again what an electronic text is in the context of analysis. Then we look at how analysis takes apart the text to recombine it in ways that let you reread it for new insights. A concordance would be an example of such a recombination. Finally we discuss how interactive visualizations extend recombination to bear meaning.
BibTeX:
@inbook{Sinclair2015TextAnalysisVisualization,
  author = {Sinclair, Stéfan and Rockwell, Geoffrey},
  title = {Text Analysis and Visualization},
  booktitle = {A New Companion to Digital Humanities},
  publisher = {John Wiley \& Sons, Ltd},
  year = {2015},
  pages = {274--290},
  url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118680605.ch19},
  doi = {10.1002/9781118680605.ch19}
}
Ascari M (2014), "The Dangers of Distant Reading: Reassessing Moretti’s Approach to Literary Genres", Genre., April, 2014. Vol. 47(1), pp. 1-19. Duke University Press.
Abstract: Moretti’s research in the field of digital humanities is a welcome addition to other forms of literary inquiry. What worries me is Moretti’s tendency to regard distant reading as objective, within the framework of a purportedly scientific approach to the humanities, which might be more aptly described as pseudoscientific. As we have seen, Moretti’s (2000a, 57) distant reading is meant to overcome close reading, a practice that he describes as a “theological exercise,” therefore as a relic of a former age. Following his enthusiasm, we may come to regard distant reading as a liberation from the constraints of a time in which interpretation came to coincide with revelation and meaning was associated with the univocal nature of authorial intention rather than with the multiple encounters between a text and its readers, but does distant reading really provide a liberating and democratic approach to literature? My contention is that far from opening new perspectives, distant reading may actually blunt our critical faculties, inviting us to inadvertently adopt biased views of literature under the mask of objectivity.

The central part of my essay is aimed at revealing precisely the pseudoscientific nature of distant reading as applied to the analysis of literary genres. Before proceeding to an inevitably close reading of Moretti’s theories, let me add that Moretti has proved capable of opening new avenues to literary inquiry, and we should be grateful to him for this inventiveness. I only wish to call attention to the dangers an uncritical application of scientific metaphors to literature may produce. I hope my attitude will not be regarded as irreverent.
BibTeX:
@article{Ascari2014DangersDistantReading,
  author = {Ascari, Maurizio},
  title = {The Dangers of Distant Reading: Reassessing Moretti’s Approach to Literary Genres},
  journal = {Genre},
  publisher = {Duke University Press},
  year = {2014},
  volume = {47},
  number = {1},
  pages = {1--19},
  doi = {10.1215/00166928-2392348}
}
Orlemanski J (2014), "Scales of Reading", Exemplaria., June, 2014. Vol. 26(2–3), pp. 215-233. Informa UK Limited.
Abstract: We read according to different scales: fast or slow, selective or thorough, deeply or skimming along. What is at stake in scalar variations like these? Quite a lot, this essay argues. The contrast between close reading and distant reading, as formulated by Franco Moretti, raises provocative challenges for the discipline of literary study. Moretti’s claim that knowing is not reading exemplifies the current devaluation — and undertheorization — of the kind of knowledge that close reading produces. Distant reading converges with recent critiques of historicism, experiments in machinic reading, and narratives of the turn away from the linguistic turn to foreground the epistemological limits of interpreting individual texts. In light of these challenges, I advocate the reexamination of precisely what we learn through close reading. This project, of articulating anew the terms of literary-historical understanding, may be aided by resources of the hermeneutic tradition — although the possibility seems out of step with current antihermeneutic ideas. I argue that scale remains a literary event and that units of analysis derive from processes of interpretation. We need disciplinary accounts that incorporate both intrinsic and extrinsic sources of meaning and that explain why experiences of reading have validity as sources of knowledge.
BibTeX:
@article{Orlemanski2014ScalesReading,
  author = {Orlemanski, Julie},
  title = {Scales of Reading},
  journal = {Exemplaria},
  publisher = {Informa UK Limited},
  year = {2014},
  volume = {26},
  number = {2--3},
  pages = {215--233},
  doi = {10.1179/1041257314z.00000000051}
}
Ross S (2014), "In Praise of Overstating the Case: A review of Franco Moretti, Distant Reading (London: Verso, 2013)", Digital Humanities Quarterly. Providence Vol. 8(1)
Abstract: This review of Franco Moretti's Distant Reading summarizes Moretti’s major arguments within the larger context of recent debates in the digital humanities. Particular attention is given to Moretti’s uptake of Immanuel Wallerstein, to his controversial critique of close reading, and to the variety of digital-humanistic methods that comprise Moretti’s quantitative formalism. Most valuable as an artifact of literary-critical history rather than a how-to guide or theoretical treatise, this hodgepodge of essays is at its best as an audacious and defensive academic memoir tracing Moretti’s transformation into a digital humanist. As Moretti champions the broad explanatory power of quantitative literary analysis, he overestimates the scientific objectivity of his analyses while undervaluing the productively suggestive stories of doubt, failure, and compromise that lend nuance and depth to his hypotheses. Combative, absorbing, highly topical, and unevenly persuasive, Distant Reading embodies both the optimism of early digital literary studies and its perils.
BibTeX:
@article{Ross2014PraiseOverstatingCase,
  author = {Ross, Shawna},
  title = {In Praise of Overstating the Case: A review of Franco Moretti, Distant Reading (London: Verso, 2013)},
  journal = {Digital Humanities Quarterly},
  year = {2014},
  volume = {8},
  number = {1}
}
Moretti F (2013), ""Operationalizing" or the Function of Measurement in Modern Literary Theory". Technical report, Stanford University (Literary Lab), pp. 1-15.
Abstract: An uncommonly ungainly gerund, “operationalizing” is nevertheless the hero of the pages that follow, because it refers to a process which is absolutely central to the new field of computational criticism, or, as it has come to be called, of the digital humanities. (...) the operational approach refers specifically to concepts, and in a very specific way: it describes the process whereby concepts are transformed into a series of operations—which, in their turn, allow to measure all sorts of objects. Operationalizing means building a bridge from concepts to measurement, and then to the world. In our case: from the concepts of literary theory, through some form of quantification, to literary texts.
BibTeX:
@techreport{Moretti2013OperationalizingFunctionMeasurement,
  author = {Franco Moretti},
  title = {"Operationalizing" or the Function of Measurement in Modern Literary Theory},
  institution = {Stanford University (Literary Lab)},
  year = {2013},
  pages = {1--15}
}
Moretti F (2013), "Distant Reading". New York: Verso.
Abstract: How does a literary historian end up thinking in terms of z-scores, principal component analysis, and clustering coefficients? The essays in Distant Reading led to a new and often contested paradigm of literary analysis. In presenting them here Franco Moretti reconstructs his intellectual trajectory, the theoretical influences over his work, and explores the polemics that have often developed around his positions.
From the evolutionary model of "Modern European Literature," through the geo-cultural insights of "Conjectures on World Literature" and "Planet Hollywood," to the quantitative findings of "Style, Inc." and the abstract patterns of "Network Theory, Plot Analysis," the book follows two decades of conceptual development, organizing them around the metaphor of "distant reading," that has come to define, well beyond the wildest expectations of its author, a growing field of unorthodox literary studies.
BibTeX:
@book{Moretti2013DistantReading,
  author = {Franco Moretti},
  title = {Distant Reading},
  publisher = {Verso},
  year = {2013}
}
Khadem A (2012), "Annexing the unread: a close reading of “distant reading”", Neohelicon., July, 2012. Vol. 39(2), pp. 409-421. Springer Science and Business Media LLC.
Abstract: In contemporary debates about World literature, Franco Moretti’s method of enquiry called “distant reading” has attracted considerable attention. Many have hailed it as a genuine method, and many have criticized different aspects of it. This essay tries to provide a close analysis of distant reading, and points out a number of misconceptions in it. Starting with an overview of the current discussions regarding Moretti’s method, the essay offers a detailed scrutiny of some of its practical examples. After illustrating the main problem of his method, i.e. not differentiating between two different kinds of noncanonical literature, a few methodological suggestions will be offered to help distant reading avoid the current problematic condition.
BibTeX:
@article{Khadem2012Annexingunreadclose,
  author = {Khadem, Amir},
  title = {Annexing the unread: a close reading of “distant reading”},
  journal = {Neohelicon},
  publisher = {Springer Science and Business Media LLC},
  year = {2012},
  volume = {39},
  number = {2},
  pages = {409--421},
  doi = {10.1007/s11059-012-0152-y}
}
Rockwell G and Sinclair S (2012), "Teaching computer-assisted text analysis: Approaches to learning new methodologies", pp. 1-20. University of Alberta Libraries.
Abstract: Using a computer to analyze a text intimidates many humanities students, but the reality is that text analysis is becoming a fundamental and naturalized part of how we operate in a digital society. Text analysis is what enables Google to compile and index tens of billions of web pages so that our search terms produce results; it is fundamental to building IBM’s Watson, a computer-system that was able to beat two of the top human Jeopardy! players of all time; it allows smartphone developers to build predictive texting capabilities; it also enables a humanist to study the relationship between Agatha Christie’s dementia and the richness of her vocabulary over the course of her writing career. Significant transformations of how we handle the written record are occurring as more and more of it is digitized and made available for computer analysis. Analytics are no longer an exotic preoccupation of digital humanists and computational linguists: humanities students need to understand automated methods if only because we are surrounded by their use—in everything from our email to the news. This chapter will therefore: Briefly describe what text analysis is; Make the case that analytics should be taught; Discuss how it can be integrated into humanities courses; Discuss recipes as a way of introducing students to text analysis; and Introduce the idea of notebooks for advanced students. Our goal is to start by making the case for teaching text analysis, then to provide ideas as to how it might be taught, and to end with reflections on advanced support in the form of notebooks—where the analysis becomes a form of research writing.
BibTeX:
@article{Rockwell2012Teachingcomputerassisted,
  author = {Rockwell, Geoffrey and Sinclair, Stéfan},
  title = {Teaching computer-assisted text analysis: Approaches to learning new methodologies},
  publisher = {University of Alberta Libraries},
  year = {2012},
  pages = {1--20},
  url = {https://era.library.ualberta.ca/items/d389903e-3c4a-45a0-9548-f5becc31386f},
  doi = {10.7939/R3C53FF1T}
}
Moretti F (2007), "Graphs, maps, trees". London: Verso.
Abstract: The title of this short book deserves a few words of explanation. To begin with, this is an essay on literary history: literature, the old territory (more or less), unlike the drift towards other discourses so typical of recent years. But within that old territory, a new object of study: instead of concrete, individual works, a trio of artificial constructs (graphs, maps, and trees) in which the reality of the text undergoes a process of deliberate reduction and abstraction. 'Distant reading', I have once called this type of approach; where distance is however not an obstacle, but a specific form of knowledge: fewer elements, hence a sharper sense of their overall interconnection. Shapes, relations, structures. Forms. Models.

From texts to models, then; and models drawn from three disciplines with which literary studies have had little or no interaction: graphs from quantitative history, maps from geography, and trees from evolutionary theory. The distant reason for these choices lies in my Marxist formation, which was profoundly influenced by Galvano Della Volpe, and entailed therefore (in principle, if not always in practice) a great respect for the scientific spirit. And so, while recent literary theory was turning for inspiration towards French and German metaphysics, I kept thinking that there was actually much more to be learned from the natural and the social sciences. This book is a result of that conviction, and also, in its small way, an attempt to open a new front of discussion.

Finally, these three models are indeed, as the subtitle intimates, abstract. But their consequences are on the other hand extremely concrete: graphs, maps, and trees place the literary field literally in front of our eyes, and show us how little we still know about it. It is a double lesson, of humility and euphoria at the same time: humility for what literary history has accomplished so far (not enough), and euphoria for what still remains to be done (a lot). Here, the methodology of the book reveals its pragmatic ambition: for me, abstraction is not an end in itself, but a way to widen the domain of the literary historian, and enrich its internal problematic. How this may be done, is what I will try to explain.
BibTeX:
@book{Moretti2007Graphsmapstrees,
  author = {Moretti, Franco},
  title = {Graphs, maps, trees},
  publisher = {Verso},
  year = {2007},
  edition = {Paperback edition}
}