Lecture 12d | MIT 21M.380 Music and Technology (Contemporary History and Aesthetics), Fall 2009


The following
content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: So today we are going
to spend a little bit of time discussing the Bimber
reading, which introduces a really important
concept for us. That is the concept of
technological determinism. I think this is particularly
important in our look at music and technology because
music technologies, I think, raise really
interesting questions, and pose really
interesting examples and counterexamples
of this question of technological determinism. So we’ll spend a little
time talking about that. Then we’re going
to experiment with some electromagnetic
properties of sound generation. So we’re going to generate
a mini Telharmonium light, as Collins likes to say,
using a simple motor. We’re going to experiment
with a humbucker pickup, and see what kind of sounds
we can pick up with that. And then we’re going to build
a little oscillator using the Schmitt trigger model that
Collins nicely provides for us. Good? Questions? In general, keep in mind that we will have another quiz next Thursday, a week from today. OK, good. So let's start with the Bimber
article, and I believe, Jillian, you were responsible
for this one. AUDIENCE: OK. So, basically this article
is talking about, as he said, technological determinism. Which he kind of describes
as the views on relationships between technology
and human activity, or the significance
of technology to social change. But he kind of talks
about how there’s a huge debate on what the
concept actually means. So he goes through
the three types of technological
determinism, that would be norm-based
accounts, logical sequence accounts, and unintended
consequence accounts. He kind of describes those, and
goes through different views on each of those. PROFESSOR: OK, good. And the big picture– his
goal– his big question for this article is what? AUDIENCE: Basically,
how technology relates to social aspects. Or, does it really? PROFESSOR: OK. So how technology relates
to culture and society– what action– what
power technology has in relation to
society and culture. And specifically this article–
on the biggest picture, he’s asking a question that
doesn’t really concern us. But what’s the big
question he’s trying to answer with this article? AUDIENCE: Did what Marx say–
was it really technological– PROFESSOR: Was Marx a
technological determinist? That’s sort of the big
question, but that’s the least interesting for us. People debate that a lot, and
that’s an interesting issue, but that’s not really
relevant for us. This article, for
us, provides us with a really nice way of just
getting an introduction to some of the different approaches to
what technological determinism is. So let’s first discuss
a little bit about what is technological determinism,
without looking at his three cases, let’s just talk
about this in general. What is this idea of
technological determinism? AUDIENCE: Which one of them? PROFESSOR: Just in general. Try the highest, broadest description first. AUDIENCE: So you have this
aspect of, does technology affect us? And how– in that case,
how much does it affect us? There’s also the deterministic
part, where it’s like, is what’s going to happen
in technology really deterministic? And so that’s at
times, do we affect it? Do we actually have control
over that technology? PROFESSOR: Right. I think in the
biggest sense, that’s the question we’re
trying to answer here. Is technology an independent
force that acts on its own, or do we have some control? I think, in the
biggest sense, that’s the question this gets to. What is this concept
of determinism? In general. Determinism? AUDIENCE: Do you have free will? PROFESSOR: Yeah,
it basically gets to the question of free will. Determinism– I mean, how
could we not have free will? Well, what are
you talking about? I make choices. AUDIENCE: It’s to say
that possibly the course of human history, or
any event that happens, is all predetermined
by the choices that we made in the past. So, it’s hard to say whether or
not your free will is actually your decision, or if
that’s just a conclusion of some other
decision that’s made. PROFESSOR: Right. But in that case, you still have decisions playing a part. That might be a sort
of soft determinism. A hard determinism, taken
further would be what? AUDIENCE: You have no choice. PROFESSOR: Right. And why not? Why don’t you have any choices? AUDIENCE: Everything’s just a
bunch of chemical reactions, right? It’s just all physics. PROFESSOR: It’s all physics. It’s all chemical reactions. It’s all causal change,
causality and causal change. One reaction, one physical law, interacts and results in another. And that chain of causal actions leads to a result. That’s the idea of determinism. And if that is taken
to its logical extreme, then we don’t have any
room for free will. And that’s a debate we’ll
let the philosophers discuss, it doesn’t really bother us. Well it does. Sometimes it bothers me, but
I feel like I have free will, and you probably
all do too, so we can work with that
assumption just because it’s a little more practical. AUDIENCE: So then
the other major question in technological determinism
is not determinism in general, but does technology
determine social things? Does it cause social
things to happen? So does the fact that
the typewriter came about at a certain time,
make certain things happen because of
the typewriter? Not the other way around. Not, did the
typewriter get invented because of social things? PROFESSOR: Yeah. I mean, the idea that
technologies on their own, just by existing, play a
part in the causal chain, independent of human
actions, that’s really at the core of the issue. We’ll talk about that more, but
let’s look at Bimber’s three– he categorizes three groups
of approaches, accounts of technological determinism. He’s trying to narrow the field,
because so many people have talked about this. He’s trying to narrow the field
down, and sort of categorize the ideas about this. Let’s start with the
norm-based account. What’s the norm-based account? What are the norms that
he’s talking about? AUDIENCE: So basically,
that technology is like a human activity. People create technology. They get together
and make things, then their actions are
governed by whatever political body is ruling. And also he talks about
the favoring of productivity over ethical things. PROFESSOR: That’s
the key– I think that’s the key point
in norm-based accounts. Somebody else want to
build on that idea? Norm-based? So the norms he’s
talking about– he’s basically talking
about where people give up some control
to technology, right, giving up some
control to technology. And he’s talking about
that based on these norms. And what are the norms? You mentioned one of the
norms that he mentions, which was– which
one did you say? AUDIENCE: Productivity. PROFESSOR: Productivity. Productivity, that’s right. What are some of the other
norms that he describes? AUDIENCE: Logic. PROFESSOR: Logic. Logic. And often with logic is reason. Logic, reason, productivity,
efficiency, those are the norms that he’s talking about in the norm-based account. And what he’s
talking about– how this is a sort of technological
determinism is that people say, well, I don’t know if
I like this technology, but it makes my life easier. It’s more efficient, or
it’s a reasonable approach. That’s sort of a giving up
of some control, but not all control. Ultimately, does he say that
the norm-based account is a technological determinism? No. He rejects it because
humans still play some role, but don’t have complete control. OK, the second one. Let’s go to– let’s jump
to unintended consequences. What are the unintended
consequences? AUDIENCE: Basically,
you have a technology, and you’re creating
it for some purpose. And then, way down the road, it
has some completely different effect that you
couldn’t have predicted. PROFESSOR: Good. Now how is that a
technological– suggests perhaps a technological
determinism? AUDIENCE: For instance, he
brought up the example of cars. They were created to
clean up the streets so there wouldn’t be
horses everywhere. And then it kind of led to
this whole CO2 emissions issue, and that’s kind of a
technological determinism, because we couldn’t have
prevented that because we couldn’t have predicted
it in the first place. PROFESSOR: Yeah,
unintended consequences. Technologies have results
that we may not foresee. And that gives us a sign
that the technology– it suggests perhaps,
that the technology has some sort of agency,
that technology is doing something
out of our control. That’s another
reason why it might be seen as a
technological determinism. Does Bimber ultimately
say that it’s a technological determinism? No, he cuts that one out too. And why? AUDIENCE: He makes
the distinction that because it’s– just because it’s
uncontrollable and unintended, doesn’t mean that
it’s deterministic. PROFESSOR: Right, that’s true. And also that humans still have agency, right? Humans still played a role in
creating those technologies, introducing them, so
it doesn’t completely remove the human agency. Finally, we get to the
logical sequence account, which is the one that
he wants to hold up as the real, pure
technological determinism. And what is the logical
sequence account? AUDIENCE: The technology itself
creates the social change. PROFESSOR: Technology
creates social change, that’s one aspect of the
logical sequence account. What are some other aspects of
the logical sequence account? AUDIENCE: That the existing
technologies will always entirely determine what next
technologies [INAUDIBLE] emerge. PROFESSOR: Right, and that
there’s a logical sequence. Both it’s logical,
one leads to another, and that there’s a sequence. Anybody want to flesh out
that idea a little bit more? With this logical
sequence, can we think of an example of how
somebody might interpret this logical sequence
in our world, or in some sort of
practical context? AUDIENCE: Steam
before combustion. PROFESSOR: Yeah, like you have
to go through the steam engine, before we go through
the combustion engine, before we move through
different technologies. AUDIENCE: Bronze Age. PROFESSOR: Right. We see this all the time
in simulation games, like Sim World or
something, where you’re building a little
artificial culture, and you have to go
through the Iron Age before you do this
and that, and that’s an idea of a logical
sequence of technology. Does it have to be like that? Do you agree? Is that– AUDIENCE: I was
just going to say that it was regardless
of where it was and who– PROFESSOR: That’s right. The logical sequence
happens regardless of geography, regardless of
background, ethnicity, climate, cultural context,
economics, politics. AUDIENCE: Personally,
I disagree. I may be biased because we just
read the [INAUDIBLE] article. But basically this is
making everything linear, which, as we found out,
isn’t really the best model. PROFESSOR: Right, so it does
suggest a linear trajectory. That’s absolutely right. I think that’s a good flaw. What else makes you wonder
about this logical sequence? AUDIENCE: It’s very
Western Eurocentric. It’s about our
sequence that we’ve gone through
[INTERPOSING VOICES] sequences happen elsewhere. So the idea that you need to
go through certain steps– and like now with
globalization you see in Africa they
use cell phones, but they don’t use
landline phones. PROFESSOR: That’s
a great example. AUDIENCE: It’s not
necessarily, basically leapfrogging all these things. PROFESSOR: That’s true. We have great examples of that. And of course, the idea
that the logical sequence is our sequence is, I think, pretty true. AUDIENCE: [INAUDIBLE] logical,
because once you look back and you see which
steps have occurred, it’s hard to imagine another
set of steps occurring. I [INAUDIBLE] with the
deterministic quality of the universe, because
we assume that everything– if you assume determinism
in the universe, then obviously technological
determinism holds. PROFESSOR: Right. But of course we
have people that are embracing
technological determinism, without really being
philosophical determinists. Great, so that’s Bimber. And Bimber sets
this out, and that’s the really important
thing for us. Now that we have a good idea
of technological determinism, he doesn’t want to say
these other things are technological
determinism, but I think we can think of them as
technological determinism light. They’re there, they happen. So the reason why I find
this concept so interesting and relevant to us is because
we find aspects of technological determinism all around
us in our lives, in commentary that people offer,
both in the media and in casual discourse. And I’m wondering
if you guys can think of some casual ways
in which people reinforce or suggest technological
determinism? Casual ways people
talk about technology– AUDIENCE: Just our
idea of the future. We’ve predetermined
what future is supposed to look like in science fiction. And so now if someone wants
to design a product that looks futuristic, then they have
a model to work with already. Even though it
hasn’t happened yet. PROFESSOR: Interesting. Interesting. Are people making
choices there though? Or are they– AUDIENCE: I guess we’ve
made a choice that’s supposed to drive some type of
aesthetics of our development. These are the products that
people develop and have [INAUDIBLE]. PROFESSOR: I think that’s– I
think there’s two layers there. One is not technological
determinism, that it’s sort of the
result of creativity, of fancy, that
imagines that future. That’s one part. The part that’s technological
deterministic about that is the idea that that future looks a certain way and is often better, or good, or more efficient, or more Star Trek-like. AUDIENCE: Do you think that you
can consider a social aesthetic that’s [INAUDIBLE] your norm,
or would that maybe not qualify? PROFESSOR: What do you
mean a social aesthetic? AUDIENCE: Like he was saying,
we had this predetermined way, just because of how popular
the idea of the future was, [INAUDIBLE] maybe [INAUDIBLE]
the stereotypical Star Trek idea. And then you have an idea of
what a spaceship looks like, even though it does not exist. Can you call that–
you can define that as a social aesthetic? Can you call that a norm
for certain [INAUDIBLE] PROFESSOR: Yeah,
that’s interesting. I think– AUDIENCE: File that under
norm-based accounts? PROFESSOR: Yeah,
that’s interesting. I think it is a norm. I think it is– there’s some
freedom there though, right? Because we can
imagine other things. So I think science
fiction offers some freedom and
actually some ways out of a technological
deterministic approach. Let me give you an example. One way in which I think that we
see technological determinism, is whenever people give
agency to technologies. So remember when we were reading the Sterne article, and he kept on saying
the MP3 does this. The MP3 has changed
the way that we listen to– That sentence,
in and of itself, I think is problematic,
because it implies that the MP3 is doing something. Is the MP3 doing something? Do you think it does something? Gets up, has a nice day,
forces you to listen to music at a lower cost and
a lower quality? AUDIENCE: Technically
we were talking about how it’s not
the MP3, but it’s the person who developed the
MP3 coding that does [INAUDIBLE] PROFESSOR: Right. Either the developer or
the user are real agents. But this casual use of language,
this attribution of agency to technologies, I think
is a significant point of how these technological
deterministic ideas creep into our thinking
about technology. Can you think of any others? AUDIENCE: I was going to
say there is something to be said though for the idea
that these technologies embody, in certain ways,
the ideas of people. So, the people who made
the MP3, had certain ideas about how people
hear and listen, and so that’s sort of
built into the MP3. PROFESSOR: Absolutely. AUDIENCE: So in a sense, we
can say the MP3 does do stuff. You play it and it plays music
back, and it has properties. So in a certain sense, those
properties can shape things. There isn’t agency
necessarily of the MP3, but they are reflective of
human agency cycled back. PROFESSOR: That’s right. AUDIENCE: It’s a bunch of math. PROFESSOR: Yeah, and it’s sort
of this casual use of language that sort of overlooks the
humans behind the process. And I think that it’s not– that
that’s important to recognize, that that process is happening. Can you think of other examples
of this, where we casually sort of give technology a power
that it may not really have? AUDIENCE: The nuclear
bomb overturned the world order in the 20th century. It’s completely different now. All political
science has changed, international relations. And so I don’t know if you give
the agency to the bomb itself, or to its inventors. But it changed a lot. PROFESSOR: That’s
true, that’s true. And it– very casually, people
say things like the Nuclear Age transformed geopolitics. But again, the question I
raise is, well, was it that? Or was it the people
behind it, and the culture, that led to that? And is it significant
in our language if we make a distinction? AUDIENCE: At the
same time, I don’t know if any of the people who
developed it had a choice. Because at the time,
there was a competition. And so you could look at that
as a deterministic factor also, that once
some spark goes off and there’s this idea
of the nuclear bomb, the superpowers have no choice
but to develop it, and have no choice but to go
into the Nuclear Age. PROFESSOR: That’s
one of the arguments for technological determinism. Yeah, that’s right. AUDIENCE: Once you’re
in the Nuclear Age, you can’t leave it. PROFESSOR: That some
technologies sort of push you like this. Some would say, and
I would probably say, that we still have control. That we can still make a choice,
and getting to that position that we were in,
was still a choice. I mean now we’re
trying for– we’re attempting to do
nuclear disarmament. And that’s a choice to take that
away, whether that can work– AUDIENCE: But what
you can’t take away is the knowledge of how to build a bomb. So even if everyone
disarms, then should things get tense
again, will people start building again? You really can never go back. PROFESSOR: Yeah that’s
an argument for– AUDIENCE: [INAUDIBLE] I
think it’s optimistic, in a sense, that some
people theorize that you can in fact destroy knowledge
by rewriting history books [INAUDIBLE] 1,000
years [INAUDIBLE]. PROFESSOR: Yeah. Well, and there’s
contemporary examples of countries doing that in
explicit and implicit ways. AUDIENCE: [INAUDIBLE]
any knowledge globally? PROFESSOR: It’s hard. AUDIENCE: [INAUDIBLE]
it’s hard to prove. PROFESSOR: Yeah. So I think again another sort
of telltale sign for this is when people say– talk
about technology evolving. That’s another use of language
that raises a question for me, because technology itself, in my view, doesn’t actually evolve. Humans change their
uses of technology, and technology changes. But to say that
technology itself evolves, I think this is a
problematic thing. Another common way is this idea
that the technologies we have are the best. That’s a very common idea. People think oh,
the technologies we have are the best. Are they the best? We don’t know. AUDIENCE: His whole umbrella of technological determinism, that technology evolves in this direction and the current one is the best. I think people
spend a lot of money to make us all believe
that particular idea. Particularly some
technological tycoons out in Seattle, and
maybe in Silicon Valley. There’s a couple of
guys I have in mind who thought it was certainly
in their best interest to make you think that in
order to get the best access to the internet, you need this best browser technology, which works with this best operating system and best platform, and it goes in this way. And thankfully, we make it. PROFESSOR: Exactly. And those ideas, I think, seep
into the culture and seep into the discourse. And people who don’t engage in
technology, tend to think that. And they have a
sort of resignation that they don’t play an
active role in technology. That technology is just there. OK, well they buy the
product, or they’re sort of passive consumers
of technology, and not active agents in technology. And I think this idea of
technological determinism helps us remind ourselves
that we are active agents. Well, it may be a little
easier for you guys. But it’s important
that everyone realize, even those unskilled
and untrained are active agents in shaping
what technologies happen, and what technologies
move forward. A lot of people don’t
see that all the time. OK, so the last
case now, in terms of music technology–
in our look so far at music technologies,
do we have examples? Does our look at
music technology support this idea of
technological determinism, or does it counter it? Can we think of some
things we’ve seen so far? AUDIENCE: Could the development
of a certain type of instrument follow a logical
sequence approach? Starting at the lute,
moving all the way down to the electric guitar? PROFESSOR: A determinist
might say, yes, the electric guitar
is the logical outcome of the [? oud ?]. A determinist might say that. Do you think that’s true? AUDIENCE: I don’t know. I feel like that
probably wouldn’t be the case, because
it’s either looking at it like the development and
evolution of the instrument affected the course
of music history, or it’s the other way around,
that music history affected the evolution, if you want to
call it, of the instrument. So I have to give more
credit to [INAUDIBLE] PROFESSOR: Me too. AUDIENCE: Quite a bit
of music technology follows competing power,
which is determined by literally a law,
Moore’s law, so– PROFESSOR: It’s not
determined by Moore’s law. Moore’s law is empirical,
based on an estimation of what happens. It’s not determined– AUDIENCE: It is a law, though. PROFESSOR: I don’t
think it’s a law. AUDIENCE: If you’re not
coming out with a better chip every 18 months, you’re
breaking the law. It’s pretty important that you– PROFESSOR: Is it a law? I don’t think– AUDIENCE: You get booted out of
Silicon Valley, if you don’t. It’s a real thing. PROFESSOR: It’s an economic practice. So music techno– go ahead. AUDIENCE: [INAUDIBLE]
seems very confused that they have no
way of measuring how successful a theory is. I feel like what we
need to be looking at is their predictive power. So if we have a theory that
says technologies are going to determine what
happens next, then we should be able to kind of put
these building blocks together from where we are now. And say, OK, ten years
down the line, here’s exactly what’s
going to be there. And then we can measure that. This hasn’t happened. These theories have been
around for a long time. So I feel like the
ones that claim that things are deterministic,
that they should really be able to back that up. And they haven’t. PROFESSOR: I agree. And that makes it
very clear and simple. But what I find
really interesting is that these ideas of
technological determinism sort of seep into our culture,
and seep into our discourse. Even though, as you
point out there, there’s some ways where, very
clearly, it could be tested, and could be shown
to not always follow. And so in many cases– AUDIENCE: They do
say– he mentioned Robert Heilbroner in there. And Heilbroner has this thing
about technological determinism that one of the reasons that you
might [INAUDIBLE] or anything, is that people have
historically predicted technological developments. [INTERPOSING VOICES] which
is sort of a crazy idea, I think, because it’s not like
they printed a lot of things. And so it’s sort of
hard, that’s [INAUDIBLE]. PROFESSOR: Nostradamus is
right three out of 1,000 times. AUDIENCE: People say there’s
going to be flying skateboards. There’s all these things
that people predict, and [INAUDIBLE]. PROFESSOR: Right. Well, there is the issue of a
sort of simultaneous discovery, where people separated in
different regions come up with the same inventions
at the same time. There’s many examples
of that, which is kind of an interesting case. But again, to
music technologies. I think music technologies
pose an interesting challenge to this idea of
technological determinism because, one, we see the role of
aesthetics and cultural factors making the choices that
determine what technologies are successful and
how they are used. Nobody intended the disc player of a gramophone to be used as a musical instrument by altering the playback speed. Nobody intended the
components of a radio to be hacked together
to build a synthesizer. Nobody intended
countless other examples of repurposing– that
was one of our articles, talked about the idea of
repurposing technologies. We see many examples of
that in music technology. We were talking
about the attraction to 8-bit technologies. We see people going back
to older technologies, inferior technologies,
for aesthetic reasons, not for a technological
logical sequence. And I think those give us
some interesting examples of deviating from the
deterministic approach. AUDIENCE: Yeah, I
absolutely agree. And especially– you
were saying that in order to subscribe to the
deterministic view, you need to have some
sort of predictive power. It demonstrates that
you’re actually committing some kind of
fallacy that’s really common. Which is to say, because it
did happen in this direction, you forget that there were
other possible directions that it could have
gone into, that were dependent on
a bunch of factors you didn’t even think about. I wonder how dependent
rock and roll, and all of the technologies
that go along with it, be it distortion pedals or
different kinds of guitars and pickups, is dependant on
World War II, and the baby boomer generation happening, and
the rebellious kids and stuff. PROFESSOR: You mean
just independent of the technologies? AUDIENCE: If World
War II never happened, we’d never have
distortion pedals, right? It’s probably true. PROFESSOR: Well, the Scott model
talks about an interconnected web. And I don’t like to
give agency to machines, that’s just my personal opinion. If you guys want to give agency
to the machines in your lives, that’s good for you. AUDIENCE: [INAUDIBLE]
music though? PROFESSOR: No, not at all. In fact, I’ve argued
very strenuously for the opposite
of that, and that’s a case where it becomes
really, really interesting. But this is, I think,
a useful thing for us to think about as
we move forward. Other comments? OK, let’s make some noise.
