On Academic Assessment

How’s that for a dry title?

I just completed my department’s senior research assessment report, and it was a little bit eye-opening. Assessment is the big new thing in education–if your college wants to be re-accredited, you damn well better show that you’re doing assessment–and I’ve been torn on the issue.

On the one hand, as a policy guy I firmly believe in assessment: Failure to assess policies and procedures is a gold-bricked path to accomplishing nothing of value.

On the other hand, these reports are rarely reviewed in a meaningful way and generally end up sitting on some administrator’s shelf as “evidence” of assessment, regardless of what is actually in them and whether they are ever acted upon. In addition, nobody seems to have a clear idea of how to assess academics–in my research on the topic when I first had to deal with this, I found article after article that said, “there is no one right way to do assessment,” followed by lots of mindless drivel couched in the jargon of teacher education programs. None of the citations were to actual studies, only to what other people who also hadn’t done actual studies had said. (When enough people say enough baseless things, you eventually have a body of literature you can reference.) To help us, my college brought in an “expert” to give a workshop. Some people were enthusiastic, but I heard nothing from him that suggested he actually understood the policy assessment literature (for example, he said nothing about being sure to measure output instead of input).

As a third point of unease, we teachers are being asked to spend an ever-increasing amount of time dealing with administrative details; collectively they are substantial enough that they cut into time for classroom prep and for research and keeping up with the literature of the discipline. An assessment without any real methodological grounding, destined to sit on an administrator’s shelf somewhere, seemed like the perfect waste of time.

But not necessarily so. I had forgotten to think about who the main audience for assessment is–myself and my departmental colleague. The real point isn’t to have someone from outside review our assessment and tell us what we’re doing well, what we aren’t doing well, and how we should improve, but for us to see those things for ourselves.

My colleague (who is exceptionally broadly read within the discipline, but whose primary lacuna is the policy literature) hated the whole assessment thing even more than I did. His response to the argument that we need to assess student learning was, “I know when they’re learning; they stop saying stupid stuff.” It was good for a laugh, and I agree, but it’s not particularly helpful because it doesn’t give us actual data to work with. The assessment protocol we developed does.

The Protocol
Some departments have taken a content-based approach to assessment, using a standardized test to measure students’ acquired knowledge. I like that approach, but it’s not suitable for our discipline because the field is so broad and we have a fairly minimal core curriculum in our department. So we took a skill-based approach, using student performance in the senior research project as our point of measurement. We think this is superior to a content-based approach because, in our discipline, a) there are so many diverse content sets, b) content knowledge is not (in most cases) directly translatable to career success, while c) the skills students develop are translatable across multiple disciplines and career paths.

The measurements are:

  1. Objective 1: Students will be able to choose an original research question that asks something interesting. This may be either an interesting empirical question or an interesting theoretical question.
  2. Objective 2: Students will be able to demonstrate familiarity with the relevant literature, through a professional-style literature review.
  3. Objective 3: Students will be able to gather relevant data/information enabling them to answer their research question.
  4. Objective 4: Students will be able to organize and meaningfully analyze the data to provide an answer to their research question.
  5. Objective 5: Students will be able to present their analysis clearly and persuasively in writing.
  6. Objective 6: Students will be able to verbally present their work clearly and persuasively in a public presentation.

For each of these we rate students as “unsatisfactory,” “satisfactory,” or “distinguished.” Each of the categories has a standard, such as,

Satisfactory: The student has written a satisfactory literature review, both in form and in the demonstration of familiarity with the literature, either in breadth or in depth.

Disturbingly, we were asked to write standards for our standards. I inevitably thought of the line from Brazil: “I’m having complications with my complications,” and I asked, “Will I also need to write standards for my standards for my standards?” That didn’t go over well–neither the humor nor the real import of the question was apparent to the administrator making the demand. But in the classic bureaucratic tradition, we shirked the task, and the issue seems to have been forgotten.

Usefulness
What this protocol does is allow us to keep track of the proportions of student success on each measure. This all turns out to be quite useful to us. We now have solid data to back up our correct but imprecise notions of what students were and weren’t achieving. That allows us to discern, beyond general feelings and impressions, which skills our students are actually developing and which they aren’t. And that positions us to focus specifically on the problem areas and try to figure out solutions to them. For example, our students seem to lack understanding of how a professional-style paper is structured, so we have implemented a departmental rule that each course include in its reading set, at a minimum, a number of research articles equal to the course level (e.g., a minimum of one full research article for a 100-level class, two for a 200-level class, and so on). This means not redactions or excerpts, but full articles. We hope that repeated reading of professional-style articles will develop familiarity with the form.
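To make “keep track of the proportions” concrete, here is a minimal sketch of the bookkeeping involved–nothing fancier than tallying each student’s rating on each objective and converting the counts to shares. The objective labels and the ratings below are invented for illustration; in practice a simple spreadsheet does the same job.

```python
# Minimal sketch: tally three-level ratings into proportions per objective.
# The objectives and ratings here are hypothetical, not real student data.
from collections import Counter

RATINGS = ("unsatisfactory", "satisfactory", "distinguished")

# One dict per student: objective -> rating (illustrative data only)
student_ratings = [
    {"question": "satisfactory", "lit_review": "unsatisfactory", "writing": "distinguished"},
    {"question": "distinguished", "lit_review": "satisfactory", "writing": "satisfactory"},
    {"question": "satisfactory", "lit_review": "unsatisfactory", "writing": "satisfactory"},
]

def proportions_by_objective(ratings):
    """Return {objective: {rating: share of students rated at that level}}."""
    objectives = {obj for student in ratings for obj in student}
    result = {}
    for obj in objectives:
        counts = Counter(student[obj] for student in ratings if obj in student)
        total = sum(counts.values())
        result[obj] = {level: counts.get(level, 0) / total for level in RATINGS}
    return result

for obj, shares in sorted(proportions_by_objective(student_ratings).items()):
    print(obj, {level: f"{share:.0%}" for level, share in shares.items()})
```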

We didn’t mean to use the data this way–we thought the whole thing was something of a joke, but once the data is there, it’s actually useful, and in writing up the report it’s hard (at least for a policy-minded guy like me) to not think about it. In fact the whole thing is still something of a joke, in that at the institutional level the act of producing the document is more significant than the act of making use of the findings in the document, but that’s both true for policy assessment in all domains and irrelevant to my department’s purposes. Ultimately the effort is as useful as we decide it will be.

This doesn’t mean we’ve found “the” proper protocol. It is true that there is no one way to do it, but that’s only trivially true. Sure, Political Science assessment necessarily differs from assessment in Art (in fact the first two examples I was directed to were Art and Creative Writing–not useful models), but then assessment of military performance necessarily differs from social services assessment, too. But “no one right way to do it” doesn’t mean there are no principles, and this is where the literature on academic assessment has suffered from being written by education professionals rather than professional policy analysts.

Despite the lack of understanding in the literature, there are some consistent principles that can be drawn from the policy literature. Above I mentioned measuring outputs rather than inputs–that’s basic, but frequently violated. Another is baselining and benchmarking–figure out your baseline (what you’re currently achieving) and what your benchmarks for determining improvement will be. That depends on understanding what your actual goals are, which is surprisingly difficult at times, because the easiest things to measure aren’t necessarily the real purposes of your organization. A classic example is state highway departments measuring miles of road paved, but that’s actually an input, not an output. Just paving 1,000 miles of road does not demonstrate accomplishment, because it doesn’t demonstrate that you improved the quality of any roads. Actual measurements of road (and bridge) quality would be a better measure, but obviously more difficult to come by. And it’s because the actual goals differ that the assessment protocols differ. While I think all art students should develop analytical skills, that’s probably not what the Art Department’s primary purposes and goals are, any more than developing drafting or sculptural abilities is part of my department’s goals, even though I think all my students would benefit from developing such skills.

In our case we were lucky to have a small department in which the two of us agree on what we really want our students to achieve, which is overall analytical ability above pure content knowledge. And we could figure out, or at least rough out, actual measures of that–ability to recognize an interesting question, ability to synthesize the literature, ability to collect data, ability to organize and analyze the data, and ability to present a coherent account both orally and in writing. Our actual measurements are inevitably subjective, which is less than desirable, but in this case it’s not a fatal weakness because we both have an informed sense of what constitutes a professional standard on each measure, and because our personal professional interest is to have our students do well (it’s both more gratifying and less onerous to review good quality work). The major drawback is that our subjective standards, while similar, may differ just enough to affect placement of marginal cases in the various categories. The solution to that is for each of us to read each senior project and attend each presentation each year, rather than alternating years, and evaluate them independently. That can (and may) be done, but it requires significantly more effort from each of us.

Notice that I said our “personal” professional interest drives us to give serious evaluations. Our institutional professional interest does not, although our Dean tries to persuade us it does. While this may not be a problem for my department, it could be an institutional problem. While all professors prefer students who perform well, taking time to think about whether they’re achieving standards, in what ways they’re failing, and how the professor can change long-established practices to promote achievement of standards is a process that demands a significant portion of the professor’s limited time and that risks challenging their personal identity as a professor–“maybe it’s not just my students, maybe it’s me”–and nobody is comfortable with that.

In the end, I still don’t care about the institution’s interest in assessment, because I know they won’t come up with any functional plan for actually being effective in causing my department to improve. If they did, and it was to my department’s detriment, I could simply adjust the protocol or fake the results, and they wouldn’t really know because they’re only looking at the measurements I’ve made, which are, as noted, subjective. But it turns out to be useful for my purposes, regardless of the institution’s interests, and that’s both gratifying and annoying. Annoying because I have to admit I was wrong in objecting to doing this, even though much of my reasoning is still accurate. Gratifying because all along I really did believe in assessment, and said so, assuming we actually were doing something meaningful–both in terms of what we were actually measuring and in terms of what was done with it. My pride is a bit dinged, but my professional side is gratified to report that the policy analysts have been right all along.


34 Responses to On Academic Assessment

  1. Dr X says:

    A couple of days ago I was pondering the question of whether academic-style critical thinking can be taught directly. I tend to think not, for many reasons, though I do think it can be cultivated indirectly–but only in some people, depending on both intellectual capacity and personality factors. Actually, my interest in the question was triggered by a commenter at Dispatches recommending specific critical thinking courses, as if critical thinking is a skill that could be taught directly and independently of deep content knowledge. I know some in education believe this, but I have doubts.

    Googling for articles on the subject, I ran across this:

    http://donaldclarkplanb.blogspot.com/2011/01/huge-study-do-universities-really-teach.html

    It’s a reaction to a large study, but I don’t have permission to access the original.

    Reading your assessment criteria, I think that evidence of applied critical thinking is part of what you’re looking at, in addition to communication ability. Those seem worthy capacities for measurement, but is there a baseline for comparison, something showing that the outcome measures represent growth in these abilities?

    Not trying to make your life harder. I do see what annoys you about showing standards for your standards. What that seems like, if I were to treat the request most charitably, might be to show some kind of empirical or statistical soundness to the measures you’re employing. Based on what we do in my field, I’d call that a request for the kind of meta-research that belongs to specialized researchers, whose work should be used, in turn, by the teacher-practitioner. It just seems like a much larger request than is reasonable for the classroom prof.

  2. James Hanley says:

    Damn, I had a long reply to Dr. X’s intelligent comment, and it got lost in the ether. And I don’t have the energy or time to reproduce it right now. Maybe tomorrow or the next day.

  3. Lance says:

    I am also in the middle of IUPUI’s Academic Assessment reports. Luckily, for a math class the criteria are a bit more concrete. Still, they are asking me to draw pretty big conclusions based on not very much information. The students have only just completed their first exam and two rather trivial quizzes.

    Sadly, I can often tell by this time how well any given student is likely to fare in the course. Of course, giving this information to, say, their academic adviser seems to serve little purpose. But I’m a good little soldier who needs his job and isn’t tenured, so off the reports go.

  4. Michael Heath says:

    Dr. X writes

    my interest in the question was triggered by a commenter at Dispatches recommending specific critical thinking courses, as if critical thinking is a skill that could be taught directly and independently of deep content knowledge.

    That being me, I’m compelled to respond. I agree you can’t teach such things abstractly. The books I recommended primarily leveraged subject matter to which we can always apply these lessons–public policy and electoral politics, especially when it comes to voting. I was formally trained in a narrower form of critical thinking within a repetitive manufacturing environment, both in executive management skills and in more manufacturing-centric skills (like quality mgt.). I found the principles taught and learned there transfer easily to any subject I’ve studied.

    My promotion of formally teaching this subject in an on-going manner was primarily motivated by the increasing need to operate and govern in an increasingly complex environment, with less room for uneducated citizens who can’t think critically and a far greater demand for a population increasingly capable of thinking critically–both in their jobs and in our duty as citizens.

    Here’s a great article that contrasts a dying breed of American operator with the kind that is increasingly required: http://www.theatlantic.com/magazine/archive/2012/01/making-it-in-america/8844/ . I find the two-year payback rule that allows this young lady to keep her job to be both short-sighted and not fully cognizant of the direct and indirect benefits of automating. Besides the increasing importance of thinking critically, I think it’s relevant to note how badly the passive-osmosis approach has worked, even with what I think is fairly simple subject matter. As an ex-math tutor, I think I’d have a far easier time teaching this subject than I had tutoring students who weren’t adequately prepared for the math classes they struggled to master.

    I’m empathetic with James’ and his peers’ task. I can’t imagine doing assessments that aren’t tied directly to critical metrics whose results determine the success or failure of the organization and one’s own career. Without such pressure, I would think they’d come off as too utopian and abstract, generating less enthusiasm for continually assessing one’s processes and continually improving those processes based on such assessment results. I.e., this approach seems rather lumpy and divorced from the day-to-day rather than a continual journey of constant improvement which is wedded to the direct operations of the organization.

  5. James Hanley says:

    this approach seems rather lumpy and divorced from the day-to-day rather than a continual journey of constant improvement which is wedded to the direct operations of the organization

    Yes, it’s more like typical policy assessment, where the policy is in place for a while, then gets assessed to see if it’s accomplished its goals. It’s better than nothing, and some situations may not lend themselves as well to on-going day-to-day review, but it’s certainly a much more limited-value process.

  6. Lance says:

    Residing low on the School of Science totem pole I don’t get to give much input on the assessment of our classes and programs.

    Being a lowly foot soldier does have its rewards.

  7. Lance says:

    Critical thinking is as much attitude and aptitude as it is technique. Some people are very comfortable with rote, uncritical data assimilation. To be fair there is a great deal of value in this type of learning in some fields. Medicine, at least in most non-research settings, is largely a matter of retaining vast amounts of rote information.

    Even the “hard” sciences and complex mathematics require ingesting large amounts of given information. I am often amazed at the abilities of some students to memorize facts and complex processes that they really don’t understand and then give answers to complex problems when they don’t even understand the processes that they have used, by rote, to arrive at their answers!

    When I ask them to give some meaning to the answer they have produced, or explain the process by which they have arrived at the answer, these folks often have no clue what I am talking about. They treat all of science and mathematics as some sort of “black box” that you throw numbers and facts into and an “answer” pops out the other side as if by magic.

    Sometimes these people openly bristle and protest when I try to explain to them how the process they have memorized “works” or give meaning to the answer they have divined. They seem irritated that I am not sufficiently impressed that they have arrived at the “correct” answer even though they have absolutely no understanding of the process by which the answer was derived or any insight into its overall meaning.

    Certainly there are basic skills involved in critical thinking but you can’t instill curiosity into an incurious person.

  8. James Hanley says:

    you can’t instill curiosity into an incurious person.

    “[Students’] level of curiosity has declined over the past two decades, said Clayton M. Christensen, a professor of business administration at the Harvard Business School.”

  9. James Hanley says:

    Re-creating my original (lost in the ether) reply to Dr. X.

    Reading your assessment criteria, I think that evidence of applied critical thinking is part of what you’re looking at, in addition to communication ability. Those seem worthy capacities for measurement, but is there a baseline for comparison, something showing that the outcome measures represent growth in these abilities?

    No, unfortunately there’s not. Departments that can do a content-based assessment and that bring in most of their majors as frosh (and funnel them into a particular required introductory course) are well positioned to use a pre-test/post-test method, so that they can definitively say whether their students learned something or not. I’m rather jealous of them. Not only do we think a content-based approach isn’t really appropriate for our discipline (although I think it could be done), but we draw most of our students as a consequence of their having taken one or two courses, so by the time we could identify likely majors it’s too late to give them a real pre-test–they’ve already experienced part of the experimental treatment.

    What we’re doing is to compare year over year changes, which implicitly assumes that the students are not fundamentally different from year to year. We deal with a small enough number of students (8-15 research seminar students per year) that we realize actual year-to-year comparisons may be invalid, but hopefully the moving average will still be meaningful. That means if we see improvement over time, even if there are bad years, we’re assuming it’s not because a) the college is bringing in better students, or b) our department is bringing in better students (or driving away lesser students). Those are somewhat uncertain assumptions to be sure, and you’ve caused me to think more carefully about them (thanks!). We might want to get their ACT scores as a proxy for their initial quality. It’d be an imperfect measure, but probably the best available. If we found that a certain ACT range tended to achieve at only one level in our first few years, but after making programmatic adjustments the same ACT range tended to achieve at a higher level, it wouldn’t be totally shitty data, I don’t think.
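    To make the moving-average idea concrete, here is a rough sketch of the bookkeeping I have in mind–a three-year moving average of the share of students rated “distinguished” on a single objective. The yearly counts are invented for illustration; a spreadsheet handles this fine in practice.

```python
# Rough sketch: 3-year moving average of the share of students rated
# "distinguished" on one objective. All numbers below are invented.
years = [2008, 2009, 2010, 2011, 2012]
distinguished = [2, 4, 1, 5, 4]   # hypothetical counts rated "distinguished"
seniors = [9, 12, 8, 14, 11]      # hypothetical research-seminar enrollment

shares = [d / n for d, n in zip(distinguished, seniors)]

WINDOW = 3
for i in range(WINDOW - 1, len(years)):
    avg = sum(shares[i - WINDOW + 1 : i + 1]) / WINDOW
    print(f"{years[i]}: {WINDOW}-year moving average of 'distinguished' = {avg:.0%}")
```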

    And getting that kind of insight is ample value from having written this post.

    I do see what annoys you about showing standards for your standards. What that seems like, if I were to treat the request most charitably, might be to show some kind of empirical or statistical soundness to the measures you’re employing.

    Heh, charitable indeed. The College does it because the North Central Association requires it for re-accreditation. The NCA will never look deeply into it, because to do so for each department at each school they accredit would require vast resources. The College looks only as deeply as “can you show that you’re making changes in response to this, and can you show improvement,” and since they’ll never look at the actual data points (senior research projects) to evaluate those themselves, they have to accept whatever data we give them, which we could–were we that type (and I know college profs who are)–easily fudge. So ultimately the issue of data quality will only be taken seriously, if it is, by the respective departments. Which is all well and good to the extent that they’re the ones whom this process is theoretically intended to motivate to improvement. But as you note:

    I’d call that a request for the kind of meta-research that belongs to specialized researchers, whose work should be used, in turn, by the teacher-practitioner. It just seems like a much larger request than is reasonable for the classroom prof.

    Yeah. Exactly. But this assessment business is still pretty new in academia, and these types of things tend to build over time. At some point somebody should be able to make a research career out of doing that kind of research, and we can all benefit from that. But that will never happen without all this initial aimless wandering, plaintive requests for help, and persistent bitching.

    I just hope that NCLB has made it clear that simple standardized testing is not a good assessment method, and that Congress doesn’t decide to take the simple path and require that of colleges as a condition of their students receiving federal financial aid. I can see that happening, and the thought gives me cold sweats.

  10. Matty says:

    you can’t instill curiosity into an incurious person.

    The real puzzler is how we are so effective at getting the curiosity out, most young children seem to want to find things out but somewhere in the growing up process learning seems to turn into ‘work’ for a lot of people and becomes something to avoid.

  11. Lance says:

    Interesting article.

    Take, for example, the lecture, which came up for frequent shellacking throughout the day. It is designed to transfer information, said Eric Mazur, professor of physics at Harvard. But it does not fully accomplish even this limited task.

    Lectures set up a dynamic in which students passively receive information that they quickly forget after the test. “They’re not confronted with their misconceptions,” Mr. Mazur said. “They walk out with a false sense of security.”

    The traditional lecture also fails at other educational goals: prodding students to make meaning from what they learn, to ask questions, extract knowledge, and apply it in a new context.

    The IUPUI math department has long been attempting to de-emphasize the lecture and encourage “collaborative learning”. I initially balked at this approach. I felt I had a great deal of information to impart to the students and damn little time to do so, especially in the 3-hour-per-week courses.

    Were they supposed to “teach themselves” the material, I wondered? Well, now I spend about half of the time lecturing and use the other half of the time with the students up at the board working problems together. The results are mixed but mostly positive.

    Many students don’t like having to participate in this exercise, at least at first. Also I think this method works better in a format where students have one or two lecture periods a week and one or two recitations dedicated to problem solving in groups.

    Although I must say that I believe a program that facilitated students getting together in after-class study groups would be more effective. This is what I did in my undergraduate studies. Whether those who favor instituting novel teaching methods agree or not, qualified instructors have valuable insights that are not going to be transferred to a group of students fumbling through the material with each other.

    Also I think administrative academics have a tendency to grasp at new trends while devaluing older, and in some cases more effective, pedagogies in the name of innovation, not to mention justifying their salaries.

  12. Lance says:

    As to students becoming more incurious I think it may be due to our society viewing education as a means to an end rather than a goal in and of itself. It seems to me that more and more students see their courses as arbitrary “boxes” that need to be checked off on the way to a degree that will enable them to enter some profession or acquire some job.

    Certainly there have always been these “preparatory” courses of study, but I entered the sciences to gain an understanding of the universe.

    (Wow, that looks really pretentious in print!) But still it was my main reason for becoming a physicist.

    I rarely encounter students that demonstrate a real desire to gain insight into the way the world around them works or seek out the beauty and mystery that a proper science education, in and of itself, can reveal.

    Of course it may be a function of my working as a lecturer in the lower mathematics and physics courses that don’t always attract the “best and the brightest”. But even so it seems to me that students are becoming less and less interested in acquiring the tools that would engage them in critical thinking and a deeper understanding of the world around them.

  13. James Hanley says:

    Ditto everything Lance said here. Students do learn better, I think it’s clear, when they are actively engaged in problem-solving with the material. But there are things they need to be told, as well. And I’m not entirely satisfied with the claim that “students are different today, they don’t respond well to lectures.” That may be true, but the reality is that a large portion of their working lives, no matter what they end up doing, is going to consist of them sitting still while somebody gives them a damned lecture, and they need to learn how to listen. I have no idea how to make that happen, though.

    Of course the lecturers themselves have a responsibility to learn how to be interesting–not “entertaining,” but how not to drone, mumble, be didactic, etc. I realized last term that in response to students tending to pay less attention I had gotten louder and more didactic, and that it wasn’t working. This term I have been consciously trying to be softer in tone, more conversational, and to pause more, frequently asking, “what does that mean,” or “why is that,” to stimulate them to stop just receiving and start pondering, and trying to discipline myself to wait for an actual response before saying anything else (that’s really hard–the average teacher gives up waiting after only a few seconds, assuming nobody’s going to answer, when frequently it just takes students more than a few seconds to think through the question). I’ve noticed more student attentiveness when I’m softer and more conversational than when I’m loud and didactic. Or at least casually it appears so, but I’m going on actor’s instinct about audience engagement, and I think there’s a degree of reliability to that.

    The funny thing is, I had a growing awareness of my unpleasant change in tone, but it didn’t really hit me until after my daughter–who had walked over after school and saw me in the classroom–asked, “Why are you yelling at your students?” My response was that it was a large classroom, which it was, but I realized that if it sounded to her like I was yelling, then it very well might have sounded to my students like I was yelling at them, too. And who can seriously pay attention when being yelled at for an hour and a half?

    But one of the things I thought was most interesting in the Harvard article was the claim that repeated testing was a great pedagogical method. So I’m thinking of taking the more rote material for classes like American Government and Research Methods and building quizzes in PowerPoint to cover it, with each quiz on new material also including some questions from prior material, so that they’re faced with each question 3 or 4 times. And I’ll set them so they can be retaken, so students who have any actual concern for their grade will retake them to correct–and learn–what they got wrong before. Of course they will be multiple choice, and criticisms of it be damned. Multiple choice works fine for rote learning, and the only way to functionally accomplish repeated testing is to have it automatically graded. Ideally (we’ll see, it’s an experiment after all), that will allow me to get the rote material across with less class time devoted to it, so that I can use in-class time for problem-solving and analytical thinking.

    I plan to spend a lot of time this summer building that stuff up. Yahoo.

  14. James Hanley says:

    Matty,

    Good question. I think the answer lies in what variables affecting children have changed in the last 20 years. TV? No. Video games? Yeah, but are they diminishing curiosity? Seems to me a lot of them engage it (any that are puzzle solving, and not just shoot-em-up, at least). Our methods of childhood education? Don’t know.

    I wish I knew.

    And sometimes I wonder if it’s really true. I know we faculty members tend to think students were better in the past but I think much of that is one part comparing them to ourselves at that age (and we’re not particularly representative) and one part romanticizing the past.

  15. Lance says:

    Matty,

    The real puzzler is how we are so effective at getting the curiosity out, most young children seem to want to find things out but somewhere in the growing up process learning seems to turn into ‘work’ for a lot of people and becomes something to avoid.

    You said it brother! I often wonder this myself.

    I recently witnessed an incident that demonstrates this phenomenon. It was Super Bowl week here in Indianapolis and luckily the weather was unseasonably warm. I was walking around campus two days before the big event and there were two A-10 “Wart hog” attack aircraft circling the downtown area at very low altitude. Perhaps they were practicing for a pre-game “fly over”.

    Here were these two multi-million-dollar warplanes darting and banking overhead, turbofan engines in full glorious shriek, and I noticed that nearly all of the students walking on campus couldn’t be bothered to even look up!

    I noticed a group of students waiting for the shuttle bus huddled together, each with faces glued to various hand held electronic boxes. They not only didn’t look up, they seemed annoyed by the noise.

    I fear we are becoming a race of addle brained lemmings.

  16. Lance says:

    James Hanley,

    And sometimes I wonder if it’s really true. I know we faculty members tend to think students were better in the past but I think much of that is one part comparing them to ourselves at that age (and we’re not particularly representative) and one part romanticizing the past.

    Certainly teachers throughout time have had this perception. Also it can be a dodge to defer our own inadequacies onto our students.

  17. James Hanley says:

    it can be a dodge to defer our own inadequacies onto our students.

    Not that you or I would ever do such a thing…

  18. Dr X says:

    Actually, Michael Heath, I wasn’t thinking of you. It was a brief comment from Aquaria that got me thinking and reading. I don’t even think I read the comment you’re referring to. Might not have even been the same thread.

  19. Dr X says:

    At James Hanley:

    What we’re doing is to compare year over year changes, which implicitly assumes that the students are not fundamentally different from year to year.

    That makes sense with the caveats you lay out. And yes to the rest of your explanations and clarifications.

  20. Dr X says:

    @ Matty:

    The real puzzler is how we are so effective at getting the curiosity out, most young children seem to want to find things out but somewhere in the growing up process learning seems to turn into ‘work’ for a lot of people and becomes something to avoid.

    I’m not sure that we are actually destroying curiosity. The variation in diminishment of curiosity as people age might be a function of normal neurological development and variation. Clearly there is adaptive value in the youngest human beings being extreme experimentalists, and neurological development at the earliest ages involves tremendous plasticity, with new neurons and dendrites developing and being pared away at a rapid rate that slows down, a lot, with age. That probably accounts for things like how easy it is for a 2-year-old to acquire language (my mother spoke two fluently by age 5), while the same task is quite difficult for an adult. But there is variation even in adults, those who retain greater curiosity and those who lose it, exposed to the same classrooms and even the same family. If you consider neural adaptation as a group or clan phenomenon, it might not be so great to have nothing but people who love pondering ideas and tinkering above all other activities. Neurodiversity and specialization is probably quite adaptive for the group as a whole.

  21. Dr X says:

    A couple of other thoughts on curiosity. The people who comment here or over at Brayton’s blog are probably in the top 10%, both in raw cognitive-ideational machinery and in personality-driven inclination toward investigation and analysis. Check out the comments in Yahoo or Fox News threads for contrast. Those threads are probably much more representative of the American population as a whole, and maybe even tilted a bit above average, because they are people who at least have an inclination to read and respond in writing.

    Point is that I think bright people tend to be a bit surprised by how unimpressive the intellect of a person with an IQ of 100 is, and that IQ is squarely in the middle of the distribution. Half of all people fall below it, and I do think that IQ and curiosity are related. g seems to be about the speed and ability to juggle and move bytes around in working memory. If you can’t do that well, it’s pretty hard to think about the moving parts and relations between ideas. If, in turn, you can’t easily make complex connections between dynamic parts, it’s difficult to light an intellectual fire that propels you forward.

  22. Dr X says:

    Okay, one last thing (I hope). I’m thinking aloud now. Toddler and children’s curiosity and neural plasticity is largely about concrete acquisition related to how the world works. Between language and the mechanics of learning up to age 7, there is a great deal that has to be put in place, but it’s a kind of concrete curiosity, just learning that when I do A, B will happen, but it doesn’t entail understanding of the properties behind the rules and laws being discovered by natural experimentation and observation. That kind of curiosity comes later and, I think, depends on the development of new capacities beyond the reach of many people. So when the curiosity needed to establish basic, concrete participation in human society is complete, what happens from there might be limited by constitutional factors.

  23. Lance says:

    Dr X,

    Toddler and children’s curiosity and neural plasticity is largely about concrete acquisition related to how the world works. Between language and the mechanics of learning up to age 7, there is a great deal that has to be put in place, but it’s a kind of concrete curiosity, just learning that when I do A, B will happen, but it doesn’t entail understanding of the properties behind the rules and laws being discovered by natural experimentation and observation. That kind of curiosity comes later and, I think, depends on the development of new capacities beyond the reach of many people.

    I thought about the developmental aspects of the brain of children in response to Matty’s remark.
    I’m glad I didn’t attempt to express my knowledge of the subject since I would have ended up looking like those poor sub 100 IQ slobs on Yahoo in comparison to your highly informed and well written response.

  24. Lance says:

    I find a wry sense of humor to be a marker for overall intelligence. Of course I’m basing this on anecdotal evidence, but I am often drawn to a person’s wit and rarely does that first indicator lead to anything but a highly intelligent person.

    There are of course exceptions but usually those tend to be the failure of the converse. I am sometimes surprised when highly intelligent people do not have a talent or at least appreciation for complex humor.

  25. Dr X says:

    Lance, I think your observation about wry humor may well be true. Wry humor, and sarcasm as well, requires a sense of the unspoken; there’s a secondary idea or tension involved, but it isn’t explicitly stated. And again, if we think about intellectual power as quickly grabbing and moving multiple ideas around in working memory, then that sort of humor might be more easily accessible and more readily created by brighter people.

    This brings to mind a study that looked at the abilities to both recognize sarcasm and to come up with an example of a sarcastic statement. I’m sure they found that all of the subjects could readily do this.

    http://psychcentral.com/blog/archives/2009/08/18/would-you-even-recognize-sarcasm/

    Actually, no, the abilities vary considerably. I wouldn’t be surprised if intelligence is a determining factor.

  26. James Hanley says:

    OK, who slipped Dr. X the Adderall? ;)

    More seriously, what about the claim that kids today are less curious than kids of 20 years ago? Does that make any sense to you?

    As to sarcasm, I briefly dated someone who didn’t recognize sarcasm. She was bright (a lawyer), nice, reasonably humorous, but could not distinguish between sarcasm and a serious verbal attack. Weird, and hence the exceptional briefness of the relationship.

  27. Dr X says:

    @James Hanley
    blockquote>More seriously, what about the claim that kids today are less curious than kids of 20 years ago? Does that make any sense to you?

    I don’t know what’s going on there–even wonder if that perception accurately reflects a culture-wide phenomenon, or not. Lots of things I could speculate about. Perhaps there is just so much distracting kids that they are less familiar with the process of really sustaining focus on one thing for long periods of time. Kind of the mental equivalent of chronic channel surfing, not investing in a narrative if it doesn’t immediately stimulate, so just look at the next thing that comes up with the push of a button. So rather than developing their own thoughts more deeply, the availability of new external stimulation propels them. Maybe.

  28. Dr X says:

    If that’s the case, it may not be that their curiosity is lacking, it just isn’t moving toward the places teachers would like it to go.

  29. Matty says:

    My thoughts on this are a bit disconnected but I’ll just put them down anyway.

    1. Kudos to Dr X for building a fascinating discussion on my throwaway comment.
    2. Did you know there is a pervasive myth in Britain that Americans in general don’t get sarcasm?
    3. I am very sceptical of claims that children today are different from children in the past (my comment before was meant to compare children and adults generally, not my childhood with ‘kids these days’). I grew up in the late 80s/early 90s and remember spending my time doing things like exploring outside and reading books, while hearing adults moan that children never did such things anymore.

  30. Lance says:

    Matty,

    I also don’t think kids today are any less curious than those of the past. I think they are just presented with a different environmental landscape than the one from my 1960’s and 1970’s childhood.

    I do think that most of them spend less time outside, at least with regard to unstructured activities. When I was a kid there were bands of us little adventurers roaming the neighborhood, parks and local wild places looking for things to occupy our time.

    I rarely see kids doing this anymore. If they are outside they are likely involved in some activity organized by adults such as soccer or scouting. I think parents have become more pre-occupied with both the safety of their children and the need to provide activities that will prepare the kids for their future, such as sports, dance class, music lessons, bible study etc.

    Once summer came I only saw my parents in the morning and maybe for dinner unless I needed a ride or a few bucks for the pool, a movie, bicycle tire tube or whatever activity I scheduled for my day. If I was home before 10:00 pm they were happy and left me to my own devices.

    In some ways I think it’s good that parents are more active in their kids’ development, but I also think something is lost when kids aren’t allowed to explore the world around them and make their own choices in daily interaction with the world and the other people in it.

  31. Dr X says:

    @ Matty:

    Did you know there is a pervasive myth in Britain that Americans in general don’t get sarcasm?

    That sounds familiar. A different dimension: my impression as an American is that Americans generally tend to look disfavorably on sarcasm. Enough so that I try to avoid it unless I know with certainty that I’m speaking with a person who would be amused by it.

    My fellow Americans, am I just imagining that negative view of sarcasm?

    Matty: do you sense that sarcasm is generally viewed negatively in Britain?

  32. Dr X says:

    We really need a preview[[ button.

  33. Lance says:

    Yeah, I hurts to look at a silly error and know you can’t fix it.

  34. James Hanley says:

    Oh, good job with the html, Dr. X.

    Thanks for your thoughts on curiosity. I, too, am dubious that children have changed. I’m generally skeptical that millions of years of evolution can be fundamentally altered in a generation.

    Matty, Americans do have a different attitude toward sarcasm than you Brits. We do tend to like it less, particularly in certain regions (the East Coast is more sarcasm savvy than the west, as a general rule). And I think one of the reasons we Americans tend to not get British sarcasm is that your style is more subtle. We tend to pronounce it a little bit harder, giving a stronger emphasis as a signal of what we’re doing. With you guys, the listener needs to be paying closer attention because it’s so often much more deadpan. In a related vein, you tend to use a sort of inverse sarcasm, underplaying praise. In the U.S., “That’ll do” means, “Yeah, I guess I can settle for that,” whereas you across-the-ponders sometimes use it as high praise–and that’s one that we colonists are particularly likely to not catch. Fortunately I run with the right set of people who do get that, so my “excellent” is generally understood correctly to be condemnation, and my “I guess it’s all right” as strong approval.
