
Wednesday, 15 June 2016

“Grammar Mistake” or “Grammatical Mistake”: Which Expression Is Correct?

I asked a version of this question on Quora, naively and mistakenly assuming that I would launch a groundswell of support to stop people from using the expression “grammatical mistake.”  It seemed pretty obvious to me that something was either “grammatical” or a “mistake”; it couldn’t be both.  The word “grammar” is used as a noun modifier (actually every noun in the language can be used as a modifier), as in “grammar book,” “grammar teacher,” and “grammar lesson,” so clearly the correct expression must be “grammar mistake.”  Imagine my surprise at the unanimous response that there is nothing wrong with “grammatical mistake.”




I must admit that I was trying to be a bit too cute in how I formulated the Quora question:  “Isn’t the expression ‘grammatical mistake’ a grammar mistake?”  As a number of my respondents pointed out, “grammatical mistake” isn’t a grammar mistake because it combines an adjective and a noun.  That’s how grammar works.  The expression may be semantic nonsense but that doesn’t mean it is an error in terms of grammar.

In truth, none of my correspondents would join me in calling the expression nonsense; they would only go so far as to say that it might be taken as an oxymoron.  As Billy Kerr patiently and clearly explained:

“‘grammatical’ has two distinct meanings.
Grammatical is an adjective: 1. relating to grammar. 2. well formed; in accordance with the rules of the grammar of a language
Mistake is a noun.
The adjective (in sense 1 - see above) modifies the noun. It’s perfectly grammatical (in sense 2) for an adjective to modify a noun, since that is the purpose of adjectives.
If sense 1 did not exist, it would not be ungrammatical, it would just be an oxymoron.”
Of course, "sense 1" does exist, so I can’t even save face by claiming that the expression is an oxymoron.  Could I claim it was ambiguous, a bit confusing?  Maybe, but not really.  When literate, native speakers of English unanimously claim that something is correct English, then it is correct English.  That’s how language works.
Still I was disturbed. Was it just that I didn’t like being wrong, especially about the English language?  Probably.  Why did I think “grammatical mistake” was a mistake?  Searching online I discovered this answer:
"The expression 'grammatical error' sounds, and is, in a sense, paradoxical, for the reason that a form can not be grammatical and erroneous at the same time. One would not say musical discord. . . . Because of the apparent contradiction of terms, the form grammatical error should be avoided and 'error in construction,' or 'error in English,' etc., be used in its stead. Of course one should never say, 'good grammar' or 'bad grammar.'"(J. T. Baker, Correct English, Mar. 1, 1901)
from http://grammar.about.com/od/fh/g/grammaticalerrorterm.htm
This discovery wasn’t all that reassuring since I found it on a web page called “grammatical errors,” and it meant I was about 115 years out of date; even Baker wasn’t willing to call “grammatical error” a mistake, just an expression to be avoided.  To add to my misgivings, Baker’s example of “musical discord” was an expression I could imagine myself using.  Then there was my Quora correspondent Bernard Glassman, who acutely observed that the problem I was alleging would also have to apply to “hypothetical question” and “logical fallacy.”  Ouch.  I had never complained about “logical fallacy,” but the expression suffered the same contradiction as “grammatical mistake.”

Reading (in fact, misreading) Edward Anderson, a third Quora respondent, I suddenly considered another possible meaning of “grammatical error.”  Could it mean that the grammar itself was wrong?  Not that anyone’s individual use of grammar was wrong, but that the rules of grammar themselves were wrong at some other level—in terms of semantics or logic or efficiency or clarity.

I have certainly sympathized with students who found it plainly stupid that “my brother is bigger than me” is ungrammatical while “he is bigger than I” is grammatically correct.  Traditional prescriptive grammar has created some fatuous prohibitions, like the bans on “split infinitives” and on ending a sentence with a preposition (on the grounds that you can’t do those things in Latin).  The most recent grammar controversy even has a name, the oxymoronic “singular their.”  Prescriptive grammar (pre-controversy) dictated that “Every student handed in his assignment on time” was correct grammar even if every student in the class was a woman.  This might be an example of a “grammatical mistake” but, of course, it’s not what people mean when they use this expression.

I haven't let go.  I need to pursue this conspiracy we call grammar and standard English further and deeper and wider.


In the interests of full disclosure, here are the responses of my Quora correspondents:


Billy Kerr, Native English speaker, from the UK.

No, because “grammatical” has two distinct meanings.
Grammatical is an adjective: 1. relating to grammar. 2. well formed; in accordance with the rules of the grammar of a language
Mistake is a noun.
The adjective (in sense 1 - see above) modifies the noun. It’s perfectly grammatical (in sense 2) for an adjective to modify a noun, since that is the purpose of adjectives.
If sense 1 did not exist, it would not be ungrammatical, it would just be an oxymoron.

Bernard Glassman, Once a teacher of English, always, and annoyingly, a teacher of English.

If "grammatical mistake" is itself an error in grammar, is calling something a "hypothetical question" equally erroneous, since it is, in fact, a question? What, then, is a logical fallacy? (This is getting to be way too much fun, but I would love to hear some other examples of those two, contradictory, meanings of “-ical.”)

Selena York, Business, Marketing, Finance, Insurance, Advertising, Consulting, Management,

I always thought it was “grammatical error”. Either, or -

Kimberly Masterson, Editor, proofreader, writer in the United States

Thanks for the A2A. Grammatical mistake is acceptable. My personal opinion is that grammatical error sounds better. Both are grammatically correct.

Edward Anderson, 7 years of Grammar School

Interestingly, however, even if we stick by your chosen definition of #2, which is by far not the most commonly used one, the term “grammatical mistake” is still not a mistake in grammar. It is a syntactically well-formed phrase consisting of a noun and an adjective that modifies it. It is, at best, an oxymoron, like “jumbo shrimp,” “military intelligence,” or “president trump.”
In fact, there are entire classes of what you refer to as grammatical mistakes, where the grammar is unassailable, yet still there is a mistake. We see them far more often in computer programs than in natural language. There’s the banana problem, where you run off the end of an array (so called as an homage to the grade-school child saying, “I know how to spell banana, but I don’t know when to stop.”) Then there’s the off-by-one error, where you store information in an array as if it’s zero-based, but retrieve it as if it’s one-based. The more formal term for these is not “grammatical error,” however; it’s semantic error.
You see, in English, “grammatical error” in common usage does not mean an error that is grammatical. It means an error in the grammar. And semantic error does not mean an error that is semantically well-formed; it means an error of semantics.
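An aside of my own, not Anderson's: here is roughly what his off-by-one example might look like in Python (the function name and the list are mine, purely for illustration).  The snippet is perfectly “grammatical” -- the parser accepts it without complaint -- yet it contains the semantic mistake he describes, reading one position past the end of a list.

# A syntactically well-formed function: the language's "grammar" is respected.
def last_item(items):
    # Semantic mistake: valid indices run from 0 to len(items) - 1,
    # so this reads one position past the end of the list.
    return items[len(items)]

try:
    print(last_item(["banana", "apple", "cherry"]))
except IndexError as err:
    # The parser never objected; the mistake only shows up when the code runs.
    print("Semantic error:", err)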

Billy Kerr 
Actually sense 1 existed first. “grammatical (adj.) 1520s, of or pertaining to grammar," from Middle French grammatical and directly from Late Latin grammaticalis "of a scholar," from grammaticus "pertaining to grammar".
So etymologically speaking, you have the timeline backwards.

Malathy Garewal, Never learnt the grammar, but am a voracious reader and love the language.



Thanks for the A2A.
No, I do not think so.
I do understand the reason for the question, but I think here ‘grammatical’ is used as a qualifier for the kind of mistake made. Though I personally would prefer to say that something is grammatically wrong.
As for your reasoning of ‘grammatical’ versus ‘ungrammatical error’, think of substituting ‘typographical’ or ‘spelling’. While I can say something is a ‘typographical error and not a spelling mistake’, it would not be right to say ‘untypographical’. Hope that makes sense.

Sunday, 1 May 2016

This Professor Should Be Fired for Defending What I Believe In

I call it the “ad hominem dilemma.”  Just to remind you, an “ad hominem argument” is a logical fallacy defined as trying to win an argument by attacking a person rather than the ideas that person is trying to present or represent in a debate.  The dilemma I have just coined occurs when you like an idea, but you don’t like the person presenting it, or you like a person but you don’t like the idea or argument.  In an ideal world the dilemma disappears because you always agree with the ideas of the people you like—though you might want to have your intellectual rigour checked.

So you might feel torn when you discover that Hitler liked apple pie, and you like apple pie, but you don’t want to be identified as one of those apple-pie-eating Nazis.  Like me, you might have wanted to tear out your hair when Wayne Gretzky announced he was supporting Stephen Harper in the last federal election—you remember, the election Gretzky couldn’t vote in because of the Conservative policy preventing non-residents from voting.  Tells you what years in the California sun can do to an otherwise sane Canadian hockey player.

Then there’s the Donald Trump (aka Drumpf) phenomenon.  You may have heard the claim that an infinite number of monkeys pounding on the keys of an infinite number of typewriters (i.e., keyboards without computers) would eventually type the complete works of Shakespeare.  Trump/Drumpf gets so much media coverage, without ever spelling out the details of his proposals, that eventually he is bound to make some vague promise that you agree with, and there you are facing the “ad hominem dilemma.”

Many women were dismayed by the outcome of the Jian Ghomeshi trial.  It seems pretty obvious that consensual sex does not mean you are consenting to be choked and punched in the head, but how the obvious was represented at trial was anything but clear.  Ultimately, the acute “ad hominem dilemma” has been provoked not by Ghomeshi himself (okay, being an anus is not a provable crime, but still he has been proven an anus) or by his accusers, but by Marie Henein, Ghomeshi’s lawyer.




Marie Henein should be a feminist icon, a heroine for all womankind, a tough, skilled, astute defence lawyer at the peak of her profession.  In fact, she is all those things and has become them by defending people accused of some pretty heinous crimes, including crimes against women--because that's what defence lawyers do.  Both Michelle Hauser in the Whig ("Mansbridge hit journalistic low point") and Tabatha Southey in the Globe ("Upset about the Jian Ghomeshi verdict? Don’t get mad – get informed") have broached the dilemma which Henein has provoked.

The issue of my concern will seem trivial, insignificant and certainly pedantic by comparison to the justice system's futile struggles to prosecute sexual assault.  The object of my obsession is the course plan, or what is usually referred to in colleges and universities as the syllabus (the “silly bus” that carries students from the beginning to the end of the course?).  Who cares about syllabi?  Well, I guess people of my ilk who know how to pluralize "hippopotamus"--pedants (which is generally an insult even though it just means "male teachers").

I used to really care about course plans . . . a lot.  I didn't call them course plans or syllabi; I called them "the contract," and I would do this really pumped-up, earnest presentation in the first class explaining that this document was a contract between me and my students, that they had the right to object and make changes if they could persuasively argue that something I was requesting was unreasonable or that there were better alternatives.  If the first class and "the contract" went well, the chances of the course as a whole going well were vastly improved.

Then the worst happened. University administrators began to agree with me that course plans were really important.  The Chair of our department announced a new policy. In the name of providing the best possible education to our students, in future we would all submit our course plans for review at the beginning of each semester.  My colleagues and I objected to this new policy on three grounds:  1) it was redundant; the information that might concern the department was already available in the form of course descriptions which were regularly updated, 2) the requirement to submit a more detailed description of what we would be doing with students to an administrator seemed more like surveillance than pedagogy, and 3) it would lead to bureaucratization, the uniformisation and rigidification of all course plans.  Redundancy was undeniable, but we were assured that in no way did this new policy suggest increased surveillance or bureaucratization.  The new policy was implemented.

The first time I submitted a course plan, the department Chair took me aside--at the department Christmas party--to tell me she had reviewed my course plan and determined that I hadn't scheduled enough classes for one of my courses.  I had been teaching the course for ten years and the number of classes had always been the same.  How was this not surveillance, I wondered? A year later, under a new Chair, I was notified that the same course plan contained one too many classes.  Luckily for me, as a tenured professor, I could and did blithely ignore the instructions in both cases.  

A more damaging outcome for me was the bureaucratization of the course plan.  With each passing semester I received increasingly insistent and precise instructions on the form and content of each course plan circulated through the Faculty of Education and seconded by my own faculty. The upshot was that as I presented my course plan to students I realized that what they saw before them was a replica of every other course plan that had been presented to them that week. The chances that I could credibly describe the plan as a mutual contract were nil. Even the possibility that I might convince the students there was something distinctive in the syllabus, something worthy of their concentration and interest, was minute at best.  They would view the course plan as bureaucratic red tape, imposed as much upon me as it was upon them, and they weren't wrong.  In the name of "providing the best possible education for students," I was deprived of a useful pedagogical tool.



In recent weeks, reading reports online about Robert T. Dillen Jr., an associate professor of "genetics and evolutionary biology at the College of Charleston," who was facing suspension for refusing to change his course plan to include the university's suggested course "outcomes," I thought "a messiah, a Prometheus willing to sacrifice himself to give fire to university teachers everywhere!"  I read the article in which his Dean accused him of playing "Silly, Sanctimonious Games" and which described complaints against Dillen Jr., including his self-confessed, impish penchant for deliberately misinforming students and refusing to answer their questions. Then I read Dillen Jr.'s defense of his resistance: "Why I’m Sticking to My ‘Noncompliant’ Learning Outcomes."

My ad-hominem dilemma:  despite my conviction that course plans should be the purview of teachers not administrators, everything that I have read (especially his own words) leads me to the conclusion that this Robert T. Dillen Jr. is really an ass.  His only motivation seems to be that he likes being an ass and his pleasure was redoubled by the fact that he could get away with it.   As a tenured professor he can be an obfuscating, obstreperous lump of inertia who doesn't even have to logically defend himself and no-one can do anything about it, or so he thought.

Dillen Jr. has been teaching for 34 years.  He was consulted, advised, warned, and presented with alternative "outcomes" which he rejected. Still he manages to feign bewilderment, as if he were the only calm rational mind in this brouhaha rather than its provocateur, and asks rhetorically:  "How could such an apparently minor disagreement escalate so far, so fast?"

I am irked, in the first place, because Dillen Jr. could not have done a better job of undermining all university teachers in their efforts to control the presentation of their own courses.  When university administrators argue that the syllabus must be administered by the university and not left in the hands of eccentric eggheads, Dillen Jr. will be the precedent they cite.

But I am also outraged by a university professor's vain display of elitist, aloof, opinionated incoherence.  In lieu of "course outcomes" in his syllabus, Dillen Jr. inserted a quotation from a speech given by Woodrow Wilson at Princeton University in 1896.  In his apologia, Dillen Jr. offered three justifications for using this quotation as the learning outcome of a biology course:  1) he and Woodrow Wilson were born 10 miles apart, 2) both he and Wilson "were Presbyterian professors," and 3) Wilson "seems to be so universally despised."

Here is the Wilson quotation which Dillen Jr. used as his "course outcomes" and cannibalized for his rhetorical self-defence:
Explicit Learning Outcome. "It is the business of a University to impart to the rank and file of the men whom it trains the right thought of the world, the thought which it has tested and established, the principles which have stood through the seasons and become at length part of the immemorial wisdom of the race. The object of education is not merely to draw out the powers of the individual mind: it is rather its right object to draw all minds to a proper adjustment to the physical and social world in which they are to have their life and their development: to enlighten, strengthen, and make fit. The business of the world is not individual success, but its own betterment, strengthening, and growth in spiritual insight. ‘So teach us to number our days, that we may apply our hearts unto wisdom’ is its right prayer and aspiration."— Woodrow Wilson, 1896
Beyond the ludicrousness of his justifications, the gross absurdity of Dillen Jr.'s using this quote as the cornerstone of his refusal to accept and adjust to authority is that the quote, the Princeton Commencement speech from which it is taken, and even the Bible verse which it cites (and Dillen Jr. re-cites) are all explicit refrains of the theme that the individual must accept and submit to the direction of higher authorities, including "the social world in which they are to have their life"--exactly what Dillen Jr. is refusing to do.

Nowhere in his exposition does Dillen Jr. show any interest in what his students might (or might not) be gaining from his stubbornly repeated use of Wilson's quote (encouraging Princeton grads to enlist for the Spanish-American War) as his "course outcomes."  The university's decision that Associate Professor Robert T. Dillen Jr. "would be suspended without pay for the fall 2016 academic term" strikes me as a setback for all good teachers and a gift to the students of genetics and evolutionary biology at the College of Charleston.


Addendum

Princeton University decides to remove Woodrow Wilson's name from its buildings because of his racist history.



Monday, 21 March 2016

The Art of Complaining

“Complain, complain, that’s all you do
Ever since we lost!
If it’s not the crucifixion
It’s the Holocaust.”
L. Cohen

In my brief (five years) and tiny tenure as an administrator responsible for an array of university programs, one of my duties was to receive student complaints.  Students usually had real or at least honestly perceived grounds for complaint.  The typical complaint was about the quality of instruction or the instructor of a particular course.  Frequently, the student would announce a shift of discourse with the phrase “It’s not the reason I’m here, but . . . .”

The irony of the situation was that if a student wanted to complain about a grade or even the evaluation of a particular assignment, that was a situation I could easily deal with--and that was the point students would take twenty minutes to get to.  The university had rules and procedures in place for reassessing a mark.  As I discovered the hard way, the university provided no legal means for dealing with a lackluster or incompetent teacher.  Like the psychoanalyst of the how-many joke trying to change a lightbulb, I could only change an instructor if he/she wanted to change.

Being faced with complaining students reminded me of my early days as a steward in my ESL teachers’ union.  The principal duty of a steward was to represent and counsel teachers through the grievance procedure, and we were given a weekend-long course on how to grieve (the legalistic verb for “to complain,” not a synonym for “to mourn”). Step one and rule one of the grievance process was to know what my brother and sister union members wanted; that is, what outcome, of a limited number of possibilities, they were looking for from the grievance.  Sounds simple, right?  I found this advice logical, compelling and useful, but the objective is what people most frequently lose track of in the process of complaining.  This lack of focus is, I believe, what gives complaining a bad name.



Decades later, at a university department meeting, my colleagues were complaining bitterly, one after another, about how quickly and how often students complained.  I interrupted the brouhaha to suggest that complaining was a good sign; it meant students cared and, furthermore, that I was thinking of preparing a module on “how to complain” for one of my courses.  My colleagues were not amused.

I really believe that complaining is beneficial, that we all benefit from those who have the wherewithal and courage to complain.  They are the whistle-blowers of everyday life, but the problem with complaining is one of degree, of frequency, of being identified with the “boy who cried wolf” once too often.  The conundrum for the would-be complainant then becomes the proverbial “separating of the dancer from the dance”: how to complain without being a complainer.  Although I was told when travelling in Europe that it paid to pretend to be French because the French were known to complain and would, therefore, get good service, I was never able to empirically verify this hypothesis--but it makes sense.

I have also been warned that my attitude toward complaining--that it is about outcomes--was masculinist.  (Excuse the gender stereotyping here.  I’m just the messenger.)  I have been informed that when a woman is complaining, a man’s suggesting a (peremptory and perfunctory) solution to the problem simply compounds her frustration and irritation.  It took me a while to understand this instruction, but I have come to recognize the universal principle that the less you know about a problem the easier it is to imagine a solution.  If you (and I) immediately see an obvious, quick and easy solution to a problem being presented to us, chances are we have failed to understand the details and recognize the complexity and intricacy of the issue.

There is a phenomenon that is usually identified as complaining but is really “self-expression”—as vague as that locution is.  Sometimes it is necessary or at least healthful to decompress, to vent, to exhale with expletives.  What passes for complaining is often just thinking out loud.  Sometimes we just need to hear our own words (in my case read them) in order to clarify our own thinking to ourselves.



I used to be a fan of the television series House.  Dr. Gregory House, out of context, always sounded like he was complaining, but he was carrying out a process of “differential diagnosis.”  I didn’t quite know what that meant until I read Crilly’s definition of “differential calculus.”  Both cases are studies of change:  what has changed, what needs to change, the speed of change, the meaning of change, the prognosis and prescription for change.   Complaining is a differential science and a differential art.



Thursday, 17 March 2016

“Let’s End the Myth that PhDs Are Only Suited for the Ivory Tower.” Really! Why?

“Let’s End the Myth that PhDs Are Only Suited for the Ivory Tower.”  This was the headline for an opinion piece in the Globe and Mail written by Queen’s University’s Dean of Graduate Studies.  The article reminded me of meetings our tiny caucus of English teachers used to have once or twice a year with our faculty’s Dean.  Invariably, at some point in the meeting, the Dean would turn to my colleague who was responsible for our section’s graduate programs and ask:  “How many new admissions do you have for next semester?”



Everyone in the room knew, with the possible exception of the Dean (and I suspect he may have known as well), that at this point we had two or maybe three new admissions.  Invariably my colleague would look surprised and begin to shuffle papers.  Having briefly occupied his position before he did, I had a rough idea of the bafflegab he was preparing to deliver, but I was never as good or as practised at it as he was.

“Well,” he would begin, “in order to give the most up-to-date numbers I would have to include the inquiries that the secretary passed on today. With those six, and two students from France who emailed, and of course we have identified eight of our own BA students, and two MAs, as well as the returning students, and that’s right, Theresa Somebody, a really excellent candidate will be coming in as soon as she gets confirmation of her funding request . . . .”

Thanks to the miracle of my colleague’s loaves-and-fishes rhetoric,  the Dean could claim that our two new admissions appeared to be somewhere in the neighbourhood of twenty-two.  As long as no-one insisted on knowing accurate numbers, we all had plausible deniability when, next semester, our graduate seminars turned out to be the size of tutorials.

The “Let’s End the Myth” piece in the Globe is an extension of the same desperate smoke-screen rhetoric designed to dissuade anyone from asking for accurate numbers.  

Sure, let’s end the myth that PhDs want to be professors, and at the same time we can get rid of the myth that people who go to medical school want to be doctors, or that people who go to law school want to be lawyers, or that people who study accounting want to be accountants.  Or, we could go the other route, and take a look at accurate numbers for the occupational outcomes of PhDs and deal with the situation we know exists.

Before we get to the big question, we should stop to consider that—at best—only 50% of people who start a PhD in the humanities actually finish.  The average length of time to complete a PhD is seven years; the mode is ten.  Of the lucky 50%, how many, after five, seven, or ten years of hard work and study, get the tenure-track university positions which at least 86% of them have declared they covet?  And the answer is . . . wait for it . . . we don’t know!

How can we not know something as basic as how many PhDs get tenured jobs?  Just like my colleague who had to hide the fact that we only had two new students, universities in general have to hide the dismal outcomes for PhDs.  To reveal the numbers would put courses, programs, prestige, credibility, funding and ultimately positions at risk.

What do we know?  According to the available statistics, which are somewhat out of date (from 2011) and optimistically inaccurate, 18.6% of PhD graduates got full-time teaching positions in universities.  “Full time” does not mean permanent.  Crunch those numbers!  You start with 100 PhD students; at best, only 50 of them survive the hard, five-to-ten-year slog and successfully complete the degree.  Of those 50 successful PhDs, we don’t know exactly how many, but we do know that fewer than ten (at most 9 of the original 100) got the tenure-track university jobs which, for the great majority of them, were the goal of the PhD in the first place.
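For what it's worth, here is that arithmetic laid out as a tiny Python calculation (the cohort of 100 is hypothetical; the percentages are the ones cited above):

# Back-of-the-envelope funnel using the 2011 figures cited above.
entering_phds = 100        # hypothetical starting cohort
completion_rate = 0.50     # at best, half finish the degree
full_time_rate = 0.186     # 18.6% of graduates reported full-time university teaching jobs

graduates = entering_phds * completion_rate        # 50 finish the degree
full_time_teaching = graduates * full_time_rate    # about 9.3 land full-time (not necessarily permanent) posts

print(round(graduates), round(full_time_teaching, 1))  # 50 9.3
# Tenure-track jobs are a subset of "full time," so the true number is smaller still.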

Given these depressing outcomes, you might imagine that universities are working hard to increase the number of tenure-track positions or downsize doctoral programs or a bit of both.  On the contrary, from everything I have read, the dominant priority of universities is still to maintain or increase PhD enrolments while hiring small armies of underpaid and underprivileged adjuncts, sessional and part-time lecturers to do most of the teaching.  Why? You might well ask.

Crudely put, screwing over PhDs has for decades been the national sport of academia.  For the university as well as for individual departments and programs, the PhD student is a cash cow for government funding.  Additionally, PhDs are still a mark of prestige.  Universities identify themselves as either PhD-granting (one of the big boys) or non-PhD-granting (not so big) institutions.  Individual professors applying for research grants will have their prospects vastly enhanced if they can point to PhD candidates who will serve as their research assistants.  The lucky professorial few who win the grant money can use it to travel to conferences in Miami, Mexico and Honolulu, while the bull work of their research projects is carried out by their minimum-wage PhD research assistants.

Universities may put out foggy and evasive arguments suggesting that the problems and the solutions are incredibly complicated—they aren’t.  The solution isn't for McDonald's or Rogers or Walmart to hire more PhDs.  The PhD has to be the minimum requirement for teaching in a university (with rare, obvious and fully justified exceptions) and, for that matter, for any other significant position within the university hierarchy.  Any PhD holder who is teaching at a university should automatically qualify for tenure.  The ballooning administrative and support budgets of universities need to be transferred to pedagogical objectives.

As I pointed out in an earlier post, “universities have a vertical monopoly, being both the exclusive producers and major employers of PhDs.”  It’s time universities began to acknowledge and correct their abuse of this monopoly.




Friday, 11 March 2016

If You’re One of “the Good Guys,” Do You Still Have to Worry about the FBI Accessing Your iPhone? With Addendum.

In some ways, we have not completely escaped the prejudices of our oral ancestors.  There is always a lingering suspicion that someone demanding privacy must have something to hide.

Last week the Director of the FBI was on television arguing for the agency’s right to unlock the particular iPhone used by the ISIS-inspired San Bernardino terrorist—and by extension all iPhones.  His justification is that we are “the good guys” and we’re trying to catch “the bad guys.”  It’s hard to imagine a weaker a priori argument for the simple reason that in the history of governments, tyrannies, military juntas, secret police forces, and dictatorships there has never been one that announced to the world “we are not the good guys!”

Nonetheless, personally, I have nothing to hide, and I'm a Canadian with a very non-ISIS-sounding name and a regular readership of fewer than a dozen people for this blog.  (I am proud to have a select and discriminating readership.)  The ultimate defense against being surveilled by the FBI or some other secretive police force is to remain irrelevant and insignificant.  I have nothing to fear, nor do you, right?

Still, it rubs me the wrong way that it is exactly police forces like the FBI, which insist on the importance of secrecy for themselves, that challenge the right of individuals to have secrets.  I started thinking about the people who probably thought of themselves as "the good guys" (in the current, colloquial, gender-neutral sense of the term "guys") but who were unfortunate enough to cross paths with the FBI, and then I realized that you are only one of "the good guys" until the FBI decides you're not, for whatever secret reasons they might have.

Consider some famous cases.



Ernest Hemingway, the renowned American novelist, was hospitalized for six weeks in the psychiatric section of St. Mary’s Hospital in Rochester, Minnesota, where he was receiving electroshock treatments. Hemingway was diagnosed as suffering from paranoid delusions because of his constant ranting that he was under surveillance by the FBI, that even the hospital phone was tapped, and that his nurse, named Susan, was working for the FBI. One week after he was released from hospital, Hemingway shot himself.

“Fifty years after his death, in response to a Freedom of Information petition, the FBI released its Hemingway file. It revealed that beginning in the 1940s J. Edgar Hoover had placed Ernest [Hemingway] under surveillance because he was suspicious of Ernest’s activities in Cuba. Over the following years, agents filed reports on him and tapped his phones. The surveillance continued all through his confinement at St. Mary’s Hospital. It is likely that the phone in the hall outside his room was tapped and that nurse Susan may well have been an FBI informant” (Hemingway in Love 167).



Sunil Tripathi, a 22-year-old Brown University student, was falsely identified on social media as one of the Boston Marathon bombers after the FBI released surveillance photos of the suspects.  His body was discovered in the Seekonk River on April 23, 2013; he had committed suicide.



Monica Lewinsky was a 23-year-old Washington intern when she engaged in various kinds of sexual activity with then-President Bill Clinton.  Whatever moral compass you might bring (or not) to Lewinsky's tryst with the President, it seems obvious that the affair did not constitute a crime or a threat to public security.  Nonetheless, on January 16, 1998, Monica Lewinsky was held in a hotel room by FBI agents and threatened with 27 years of imprisonment if she did not reveal the details of her relations with the President.  She was also told that the FBI would arrest her mother, who could be imprisoned for two years (http://law2.umkc.edu/faculty/projects/ftrials/clinton/lewinskyday.html).

(Whenever I reflect on this kind of prurient political theatre, I think of Pierre Trudeau's 1967 declaration, made as Justice Minister in anticipation of the Omnibus Bill, that "There's no place for the state in the bedrooms of the nation."  Someone needs, once and for all, to declare the converse: "There's no place for the nation in the bedrooms of the state.")

On December 21, 2001, Martha Stewart propitiously sold stock in a friend's company and thereby avoided a potential loss of $45,673--a minuscule amount considering her estimated wealth at the time was $700 million.  Her friend, Sam Waksal, was being pursued by the FBI for insider trading in the stock of his own company, ImClone.  Martha Stewart was never convicted of insider trading, but she did serve five months in a federal prison and two years' probation for lying to the FBI about details of the stock sale (http://coveringbusiness.com/2012/05/15/what-martha-stewart-did-wrong/).

(I still can't figure out exactly what crime Martha Stewart committed, if, in fact, she did commit a crime, but it's hard not to compare her case with that of the Wall Street companies which lost hundreds of billions of dollars in what seemed like fairly obvious mortgage and bond fraud schemes, with the result that they were bailed out with taxpayer money, their CEOs continued to receive bonuses and severance packages, and not a single Wall Street insider was ever charged with a crime.)


Addendum

Now perhaps we should include former Secretary of State and presidential candidate Hillary Clinton in this list!




Saturday, 5 March 2016

Privacy Versus Security: Debating a False Dichotomy

Is privacy necessary?

Is privacy really an innate human desire?  Is it normal to want to be alone?  While it seems intuitive and logical to assume that our culture and technology have evolved in response to a basic human desire for privacy, anthropologists, as well as communication and cultural theorists, have argued that the cause and effect run the other way around.  Our habits, customs, created environments and mindsets are not a response to a primordial human need.  Technological culture created the idea of and the need/desire for privacy.




Oral culture

In oral societies (that is, societies which depended on direct person-to-person oral communication), the desire to be alone was immediately identified as a symptom of illness.  In a world dominated by orality, today’s millennial otaku introvert generation would have been treated as either deities or mad demons.  They might have become the oracles living in caves at Delphi or the first monks dedicating their lives to transcribing ancient scripts, or they would have been imprisoned, starved, tortured and burned at the stake.  We should also consider, given cultural ecology’s displacement of the natural environment, that the neurodiverse, digi-destined, screen-slaver generation might be the next step in the evolution of our species.

Privacy is a byproduct of visual culture

Privacy is a byproduct of the visual culture created by the development of literacy, from basic forms of writing, to the phonetic alphabet, to Gutenberg’s printing press, to the digital universe we know today.  Reading meant it was possible to be alone and still be connected to the world in important, informative ways.  In fact, the most serious forms of communication and knowledge-gathering were, in this new visual/literate culture, best done in solitude.  In an oral culture, being alone meant you could only be talking to yourself or a god—both of which were suspect if not dangerous activities.

Compartmentalized living

Living in spaces that have one room for cooking, another for sleeping and another for gathering might seem “natural” to us now, but our early ancestors would be mystified by our insistence on compartmentalizing our daily activities. Primitive man might have agreed with the dysphemistic adage that “You don’t shit where you eat,” but beyond the scatological, compartmentalized privacy is cultural not natural.

No doubt our primitive ancestors at times needed to be out of view, literally in hiding from enemies and predators, as a matter of security. Hence the overlap and confusion between privacy and security, between solitude and survival.

A Gun or an iPhone:  Which is more dangerous?

Fast forward to the debate between the FBI and Apple about unlocking the iPhone once used by the ISIS-inspired murderer who killed 14 people in San Bernardino. On the surface, the request is for access to one iPhone, but in reality the FBI is asking for the ability to access all iPhones.

The debate is being couched in terms of individual privacy and public security, but this is a false dichotomy.  All things being equal (and they never quite are), security trumps privacy.  (And the pun is intended, since Republican presidential aspirant Donald Trump [a.k.a. Drumpf] has already declared that all Americans should boycott Apple.)  History has proven over and over again that this debate is between individual security and collective security, a debate closely tied to the more typical dichotomy of individual rights versus collective rights. In the American context, the priority line between collective and individual rights and security tends to slide around like the dial on an old-fashioned radio gone wild, depending on the issue--abortion, gun ownership, medical insurance, seat belts, drugs, homosexuality, same-sex marriage, civil rights, equality for women, and so on. During the debates among the Republican presidential candidates, President Obama was chastised for using the San Bernardino shootings as an opportunity to challenge the Second-Amendment rights of American citizens to "bear arms."  In this mindset a locked cellphone poses a much greater hypothetical threat to public security than an assault rifle and thousands of rounds of ammunition.

NSA, CIA and you:  Who has the right to have secrets?

In his autobiography, Playing to the Edge: American Intelligence in the Age of Terror, Michael V. Hayden, former director of the NSA and the CIA, points out that "Stellarwind," the NSA program to gather data on Americans' telephone calls which was outed by Edward Snowden, “did indeed raise important questions about the right balance between security and liberty.”


In his review/commentary of the Hayden autobiography, "Can You Keep a Secret?", New Yorker staff writer George Packer points out that last week Hayden "sided with Apple in its privacy dispute with the F.B.I." while continuing to tacitly support the CIA's programs of torture and human-rights abuses.

Secrets and safety

In his review, Packer comments:

Spooks in general have had a lot to answer for in the past decade and a half: the 9/11 attacks themselves, Iraq’s nonexistent weapons of mass destruction, secret prisons, torture, warrantless eavesdropping, the bulk collection of Americans’ data, and targeted killings.

With this recent history in mind, it seems obvious that individuals, as a matter of personal security, need to protect themselves not just from malfeasance but also from the mistakes, the callous indifference, the questionable ethics and the politically and ideologically dictated overreach of secret and secretive police forces like the NSA, CIA and FBI.






Monday, 15 February 2016

Is Your Professor a Better Grader than Moody’s or Standard & Poor's?

If you have been following the thread of my last posts, you will have arrived at the question:  Why did savvy investors from around the world buy billions of dollars of worthless bonds from Wall Street companies in 2008?  The answer is that they absolutely believed the ratings.  If the ratings agencies said that a bond was AAA, they accepted that it was a guaranteed, virtually no-risk investment.  Investors in Germany, Japan, Canada, the USA and all over the globe willingly or willfully ignored the fact that the ratings agencies—Moody’s and Standard & Poor's—were being controlled and manipulated by the very companies they were supposed to be evaluating.




If this situation sounds inappropriate to you, stop and consider for a moment:  who evaluates you when you take a university course?  Yes, you are evaluated by exactly the same person who has a vested interest in demonstrating that his/her course has produced knowledgeable, skilled graduates.  On the other hand, every course needs to show a distribution of grades, but luckily there are always a few students who conspicuously “don’t give a damn” to whom it is possible to assign lower grades—it is always useful to take note of who sits in the back row.  Overall, non-permanent lecturers (who do most of the teaching) are likely at risk of losing their jobs if their grades are low enough to cause protest or produce too many failures.  Among lecturers it is widely assumed that if they give their students low marks, the students will retaliate with low course evaluations.

The difference between the ratings agencies on Wall Street and those who evaluate university students is, I assume, that just about everyone is aware of the situation in universities.  I’ve started asking around (on Quora and Workopolis):  Do employers seriously consider a university graduate’s grades?  I infer from the answers I’ve received that the answer is “no, they don’t.”  The answers ranged from “no, they don’t consider them” to “they shouldn’t, if they know what they are doing.”  So, unlike investors who bought worthless bonds from Wall Street, employers are not being deceived by the ratings systems applied to university graduates.  

What does this fact—the disbelief in marks—mean?  Does it matter that grades don’t matter?

In my world--I mean the world inside my head--they mattered a lot.  Grades and their accompanying justification are supposed to give students the feedback they need to progress, and to make sound educational and career decisions.  When I look back on my own experience as a student, I am shocked by how infrequently I was tested in a thorough and convincing fashion.  Grades were used as punishment in some cases; in others they were gestures of sympathy; at best they were a pat on the back.  I never felt I was being reasonably tested or justly evaluated; nevertheless, I still allowed grades to determine my path in university, and in high school for that matter.  A low grade meant that subject would be dropped next year; a high grade determined my next major.  No-one ever gave me clear instructions on what I would have to do in order to get higher grades--and it always seemed unfashionable, humiliating and whiny to ask. Besides, my grades were always high enough to get by.  I never clearly understood how my grades were being determined, what specific criteria were being used to evaluate me, and now that I have had a career as a professor, I'm quite sure my professors didn't know either.

Any experienced professor seeing where this argument is heading will be quick to tell you that what I am on the verge of suggesting is just not possible.  The system does not allow professors to evaluate students in the clear and comprehensive fashion I am trying to imagine.  Moreover, it never has.  As a consequence, it is typical for professors to separate themselves from the entire business of grading.  Marking is turned over to students; marks are arrived at in some comfortable, non-judgemental fashion, or avoided in favour of pass/fail "exams" which no-one ever fails.  Many professors, myself included, feel that their job is to inform and encourage students, not judge them.

On the other hand, there is no quality-control system for university degrees.  The assumption is that this work is being done by the people who teach, but at the same time these teachers are under constant pressure, from university administrations as much as from students, to give good grades.  There is no upside to teachers' diligently, conscientiously and rigorously evaluating their students--except perhaps the silent pride which comes from the conviction that you are doing your job. The periodic evaluation of students should be an important part of the educational process.  Grades are a reflection of the underlying education that students are getting (or not getting). The problem isn't in itself that grades are being inflated (and I don't doubt that examples of unjustly harsh evaluations are numerous--exceptions which prove the rule), but that the constant growth of grade inflation has been accompanied by a corresponding devaluation in the worth of university degrees.

The discussions of the "housing bubble," the "financial bubble" and the possibility of an "education bubble" have gone on for years now.  Grade inflation in and of itself is not a great concern, but the fact that it has reached the point of making grades meaningless is a sign that the "education bubble" may have already burst.




