Data and DH: a critical reading of Miriam Posner's article "What's Next: The Radical, Unrealized Potential of Digital Humanities" (2015)
1st of October 2018
"What's Next: The Radical, Unrealized Potential of Digital Humanities" raises a very relevant question, which I would dare to summarize as that of the use of categories in data analysis within the digital humanities.
On closer reading, though, we can distinguish two parallel yet different problems: the first concerns the "contamination" of data collection and interpretation by common biases (about gender, race, etc.); the second concerns visualization as a consequence of biased analysis.
The former is the rhetorical axis of the article: the author points out, very aptly, that the categories we manipulate and convey to algorithms are profoundly shaped by paradigms about race, gender, etc., in which she sees the hand of "power". We understand, then, that DH researchers are viewed as capable of standing apart from the influence of power, their role being that of a watchdog for equal treatment in the field of big data related to humans (that is to say, nearly all data). Reading the article, we may suppose that the position of DH researchers is a particular one, standing outside of "business" (indeed, the author blames all data analysis tools for being issued from and for business interests). The flame of the author's advocacy deserves respect. Moreover, the problem of the "contamination" of AI by our human biases is raised ever more often, as we entrust AI with ever more significant responsibilities (up to court decisions and medical diagnoses). But in a world of relativity there is no neutral ground: we are all caught in a paradigm's lifespan, and what we believe is good and progressive from our point of view is not necessarily so from another's. The question is not to abandon the struggle for "objectivity" in data categorisation, nor the struggle for a more equal world and a science of it. The question is, at least in the field of human culture, to appreciate the limits of any system of categorisation and to include the knowledge of those limits in the global picture.
One of the latest articles by Lev Manovich, the founder of the Cultural Analytics approach, discusses the question of categories and their usefulness, and evokes the pathway taken by Bourdieu in his Distinction (1979), based on correspondence analysis, developed a decade before by Herman Otto Hartley. Since the method may be unfamiliar, a minimal sketch of it follows.
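Correspondence analysis maps the rows and columns of a contingency table into the same low-dimensional plane, which is what allowed Bourdieu to plot social groups and cultural tastes together. Here is a minimal from-scratch sketch; the table, group names and preference labels are invented for illustration, not taken from Distinction.

```python
# Minimal correspondence analysis (the method behind Bourdieu's maps).
# The contingency table is invented: rows are social groups, columns are
# cultural preferences (counts of survey answers).
import numpy as np

counts = np.array([
    [60, 25, 15],   # "workers":    folk, variety, classical
    [30, 45, 25],   # "clerks"
    [10, 20, 70],   # "professors"
], dtype=float)

P = counts / counts.sum()        # correspondence matrix
r = P.sum(axis=1)                # row masses
c = P.sum(axis=0)                # column masses

# Standardized residuals: departures from the independence model r x c
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# The SVD of the residuals gives the principal axes
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates: rows and columns land in the same plane
row_coords = (U * sigma) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sigma) / np.sqrt(c)[:, None]

print("groups on first axis:     ", row_coords[:, 0].round(3))
print("preferences on first axis:", col_coords[:, 0].round(3))
```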
Data analysis, in the broader field of the humanities, is not an aim in itself: it is exploratory analysis, not meant to generate numbers and formulas about human existence and culture. Exploratory analysis in the field of DH needs to focus on the four main characteristics underlined by Manovich: diversity, structure (networks and other types of relations), dynamics and variability. Thus the tendency to reduce to categories (gender, if we take Miriam Posner's example) must be part of the explorable field, not the starting point of a "sanction" about the right or wrong way to categorise. We need to rely on all the clusters to get an overview effect (and thus step to a new stage of understanding of "what's going on"), not a judgment. Posner's final reflection involving a film sequence is, I believe, sourced from this intuition (still, it is not obvious to link a film creation directly to data analysis).
On another side, the mention of "contaminated" data tools is worth discussing, as it leads to a stark conclusion: the closer we are to the source of the algorithms, and the more we, as humanists, are present at the root of the digitisation and digital analysis of big data, the more plausibly we can claim the role of "regulators of objectivity". And here lies, in my opinion, the open question about the real competencies of the DH researcher. If DH is a field of humanists (I leave the sense of this term to float between the "continental" and the "Anglo-Saxon" understandings) who only know about digital tools already developed for and by business, their efforts to dam up the proliferation of biased vision in big data analysis risk always ending in rhetorical flames. That is: we need knowledge of the digital, not about the digital.
The ethic of hackers (let's call them "good" hackers, and I hope there is no confusion about this somewhat blurry definition) is probably the same one Miriam Posner tends toward. With the difference that (good) hackers can have a real impact on the control of our digital world (forgive me, please, my optimism), while a (good) DH researcher who ignores programming remains an arbiter with no real impact on the game. That is why, I believe, the question intuitively raised in Posner's article, and still relevant today, is that of the balance between "what we think" and "what we can".
The other (and last) problematic area explored in the article, related to data visualization, is much more tangible. The claim that nearly all data visualizations "look terrifyingly authoritative" sounds a little bombastic. While we understand the aim of the criticism, what would be the "right solution" according to the author? Surprisingly, very few visualizations are offered as "good" examples (and the one which seems really interesting, Evan Bissell and Erik Loyer's The Knotted Line, is inaccessible), and it is difficult to agree that Data-izing the Images: Process and Prototype by David Kim is persuasive, as we struggle even to read it and the "overview effect" is simply fluffed.
I agree with the claim that there is a kind of cul-de-sac with the "traditional" tools of data visualization. But today (three years after the discussed article was published) some things have happened. Diverse programming libraries (like D3.js) are more and more used to create visions of an artwork based on data.
One of the brightest examples, the approach developed by Giorgia Lupi along with her breathtaking data visualizations, sits within this growing tendency. This Italian information designer, currently working in New York, advocates for Data Humanism and speaks about a "second wave" of data visualization which will be "about personalisation". Giorgia Lupi's work and reflection open another pathway to a new cognitive paradigm, one which separates "understanding" from "interpretation" (to use the terms of Susan Sontag in her Against Interpretation). The new, human approach to data is inseparable from understanding by doing, from sharing this "doing" in what we call interactivity. And that is where I personally see a full realisation of both "Digital" and "Humanities".
Digitization. What for?
15th of October 2018
I'd like to bring to discussion Lev Manovich's article Cultural Data: Possibilities and Limitations of Digitized Archives, published last year.
Manovich, the founder of the Cultural Analytics approach, describes in it, through his researcher's experience, the state of the digitization of cultural data, from its beginnings with Project Gutenberg in the 1970s up to the largest projects like Europeana, which counted, in 2016, over 53 million "artworks, artefacts, books, videos, and sounds from across Europe."
Manovich explains that, while the process of digitizing and disseminating digitized artifacts tends to cover the entire heritage of humanity, those large collections (such as the aforementioned Europeana) still look like a "patchwork" of very heterogeneous elements with no standardised sizes, extensions or classifications.
While obviously challenging for the use of this data, especially in the domains of data analytics and machine learning, this challenge must be seen as an opportunity:
"It forces us to face the human visual culture as it really exists historically – thousands of variations and their combinations, rather than a neat set of a small number of categories."
The author also notices that, while the process of digitisation expands in geometrical progression, "the most basic question for any quantitative study of cultural history remains unaddressed".
This question is, for him, that of representative sampling, the term being understood in the statistical sense, as a smaller subset standing for the larger data. Manovich says that a systematic approach to data is largely adopted by the natural and social sciences but not by the humanities, in which research "is still driven most by ideologies, rather than a balanced samples". I will not summarise here the author's argumentation and his very interesting interpretation of a statistical approach for the humanities: the article is quite short and worth reading. Below, though, is a toy sketch of what a representative sample means in practice.
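A minimal sketch, assuming an invented archive of 10,000 records with a made-up "genre" field: a stratified sample keeps every stratum in its population share, which is exactly what a canon-driven selection fails to do. Only the statistical idea is Manovich's; all names and numbers here are placeholders.

```python
# Toy illustration of representative (stratified) sampling.
# All data is invented for the example.
import random
from collections import defaultdict, Counter

random.seed(42)

# Pretend archive: 10,000 records with a made-up "genre" field
genres = ["portrait"] * 5000 + ["landscape"] * 3000 + \
         ["still life"] * 1500 + ["abstract"] * 500
records = [{"id": i, "genre": g} for i, g in enumerate(genres)]

def stratified_sample(records, key, k):
    """Draw k records so that each stratum keeps its population share."""
    strata = defaultdict(list)
    for rec in records:
        strata[rec[key]].append(rec)
    sample = []
    for group in strata.values():
        take = round(k * len(group) / len(records))
        sample += random.sample(group, take)
    return sample

sample = stratified_sample(records, "genre", 300)
print(Counter(rec["genre"] for rec in sample))
# -> 150 portraits, 90 landscapes, 45 still lifes, 15 abstracts
```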
I would like to raise another, contingent question.
We keep discussing, in our Master's core modules, Digital Humanities as the gathering, digitising and disseminating of data. The question is: what for? "To make it accessible" would not be an answer, or would be a very incomplete one. Because, answering so, we stick with a paradigm of quantity, or with a stage "in between" where "available" calls up the same question: what for? So, what for, besides the joy of heritage officers and researchers themselves, does cultural data exist and continue to grow?
As Manovich says in his introduction, the digitization of cultural heritage over the last two decades has opened very interesting possibilities. While the conversion of human artifacts into digital format is an endless and obviously necessary process, the consideration of the next steps becomes essential.
I will not flood this post with examples of the "next step" taken in recent years by hundreds of DH researchers around the world. We don't need to go far to find one: the presentation of his research by Patrick Egan (musician, musicologist, web developer) at a recent DH colloquium was a very bright example.
We need to admit that without getting our hands dirty in such "annoying" things as data mining, analysis, programming languages and so on, we act "as if". As if we were still somewhere in 2005 and a knowledge of the digital tools which allow us to gather and edit data were the skyline. The reality is that their mastery is starting to be a requirement at secondary school level. Moreover, those tools I call "facilitating" (such as Zotero, WordPress and similar) are not creative ones: they are made to comply with standards (of editing, publishing, etc.), not to fertilise research with the decisively new possibilities offered today by data analysis and visualization, machine learning and AI. Because there is a distinction, to my mind, between a researcher in the humanities who uses digital tools and a digital humanities researcher.
Who's Afraid of Edmond Belamy?
My contribution to the (somewhat absent from the MA) topic of digital art
28th of October 2018
The news made a big tour of the media, from the Times to The Guardian: Edmond de Belamy, an AI-generated artwork, was sold on the 25th of October for $432,000 at Christie's New York, far above the initial estimate of $7,000–10,000.
It is difficult to name the author: this artwork, which is part of a series of portraits (all of them since sold), is the result of a project by the young Parisian start-up "Obvious", led by three young researchers/PhD students/entrepreneurs: Pierre Fautrel, Hugo Caselles-Dupré and Gauthier Vernier. Obvious has existed since mid-2017 and represents the new wave of French start-ups flourishing on the very concentrated (in Paris, obviously) ground of French high-tech "new normals". None of them has an art background, but the media have already named them "artists".
Their credo is perfectly outlined in the title of a Medium article posted in June 2018: "A naive yet educated perspective on Art and Artificial Intelligence".
They explain, on the basis of the series of "Belamy family" portraits, the way Generative Adversarial Networks (GANs) act (the "fake" family name is a tribute to the inventor of GANs, Ian Goodfellow, whose name, translated into French but kept in an English transcription, resulted in "Bellamy"). There is, on one side, a generator, fed by data (15,000 portraits from the 15th up to the 20th century) and generating new images, and on the other side a discriminator, a kind of "art critic" indicating the level of "truth" of those new "fake" portraits. The generator thus produces new images which are more and more "true" with regard to the initial data (a minimal sketch of this adversarial loop follows this paragraph). Nothing really new, especially when we learn that even the principle of using GANs for generative art doesn't belong to "Obvious" but was developed by Robbie Barrat and shared as open source just before the French start-uppers came across it. Moreover, honestly speaking, the result (the Belamy portraits) is nothing but a raw AI creation and, from my personal point of view, much less appealing than what we can see in Robbie Barrat's GitHub account. I'm not sure whether it is because of the good marketing of the "made in France" brand, or because of the troubling and vague similarity of Edmond Belamy's depiction to some artistic style still difficult to tease out, but the fact is that this "portrait" did break through the very symbolic barrier of a prestigious art auction.
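Here is what that generator/discriminator dance looks like in code: a deliberately minimal PyTorch sketch of a GAN training loop, not Obvious's or Barrat's actual code. The network sizes are arbitrary, and random tensors stand in for the 15,000 digitized portraits.

```python
# Minimal GAN training loop (illustrative sketch, not the Belamy code).
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28   # noise size, flattened "image" size (arbitrary)

generator = nn.Sequential(          # makes images out of random noise
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, IMG), nn.Tanh())

discriminator = nn.Sequential(      # the "art critic": real or fake?
    nn.Linear(IMG, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid())

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(100):
    real = torch.rand(32, IMG) * 2 - 1   # stand-in for digitized portraits
    fake = generator(torch.randn(32, LATENT))

    # 1. Train the critic: call the archive "real" and the forgeries "fake"
    opt_d.zero_grad()
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator: try to make the critic say "real"
    opt_g.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The whole mechanism described in the article lives in those two alternating steps: as the critic gets harder to fool, the forger's images become "more and more true".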
While generative art is not a novelty, and neither are artificially generated artworks, we are undoubtedly at a new stage of what we can see as an unavoidable process of machine/human "communion". Debates about AI are omnipresent and hot. But it is precisely in art, one of the first landing grounds of AI, that the debates have not seemed so impassioned so far.
Aaron, the first program to create artworks algorithmically, is more than 40 years old. Processing, the open source software which gave (relatively easy) access to generative art practice to thousands, was created back in 2001. Since then, the community of artists and the number of programs within generative art's very large boundaries have continued to grow. And this resurgence of the debate about computer-generated art, on the occasion of the first Christie's sale (at a relatively high price) of an "artificial" artwork, is interesting in itself, both for the media rhetoric employed and for what it reveals of the art market's sensibility.
Indeed, we had to wait for an "artificial" artwork to be hung on the wall of Rockefeller Center to read such 19th-century declarations as "a poor pastiche of human genius" and "art as a way humans communicate ideas".
Perhaps having a hunch of this reaction, in their Medium article from last July, Obvious quote some warnings which sound very contemporary but were written around 1850, like "this discovery will eliminate the inferior layers of art". They were about the then-new photography. We now know that photography did not provoke the death of art but became itself a widely used art practice.
Some decades later, Dada appeared, and it seems it wasn't unanimous on the question of whether collage-made artifacts lie within art's boundaries or outside them. What was Dada? One of the attempts to oppose the paradigm of randomness to the established ideology of art, and to remove creation from the strict control of human cognitive planning (which, in the view of the Dada artists, had fully compromised itself in the absurdity of the World War).
It is quite interesting that Jonathan Jones, the author of the article in The Guardian, while defining art as "human consciousness expressing itself", resorts, among other emblematic examples, to... Duchamp and his urinal. Duchamp, from whom we know the famous "It's the onlookers who make the picture", and whose passion for making fun of, and fooling, all rhetorical definitions of art (such as "the way to communicate ideas", according to Mr Jones) was well known.
We can hang any definition on art; we will never tease out the thing which is, in essence, a way to escape the rules of "normalised" thinking. The same goes for creativity, which is an inherent component (or just another name?) of art. And we can observe, in art history past and more recent, numerous attempts to step outside the zone of human control and to leave to the (widely understood) machine a part of the creative process. It was, and still is, the same with photography or printing. The novelty of Warhol's screen prints is based on copy/paste repetition and on the effect of the uncontrolled accidents of the printing process, with its blunt "charm". Are we really scandalised by his "I'd like to be a machine", or could we simply not imagine, until recently, that the "machine" might be as scary and powerful as AI appears to us today, an AI whose mission, as Jonathan Jones quotes, would be no less than the destruction of humanity?
Answering the question of whether AI will tomorrow replace the artist, the Obvious team responds with a question: "Did the (photo) camera become the artist?" Of course the camera, even having become digital (a kind of AI, in sum), is "encompassable" and "tamable", while AI seems to escape our understanding and enters more and more into competition with humans. Soon the last bulwark, defended with such obstinacy, the emotions, will no longer resist, as the "machine" will, in a couple of years, as promised, understand our emotions better than our (sometimes indifferent) relatives do.
It is difficult to predict the future, but what seems quite sure is that art remains, at all times, a field of experimentation, not a secret garden where only highly conscious humans have right of entry. There is art in artificial (and Kunst in künstlich, iskusstvo in iskusstvennyj, and sztuka in sztuczny), as unpleasant as that may sound to somebody who believes that an artwork is nothing but a very authentic picture of its creator's consciousness.
One more piece of evidence lies in the fact that there is no art without a medium, and the meaning of this word surpasses that of a tool. However, having entrusted the medium to define, even partially, the output of his creative input, the human balks at every significant change in the complexity of this medium. Scared that the medium will escape him? Apparently, today, this fear is stronger than ever; but who can say whether it is any more grounded in reality?