On November 2nd, 2017, the Debunker Club sponsored a one-hour Twitter debate using the hashtag #DebunkDebate. We had a wonderful, cacophonous dialogue in typical Twitter-chat fashion.

The file below contains all the tweets from the debate.

Download Great 70-20-10 Debate Tweet Stream

Also, Cara North posted a prettier version here.
29 replies
  1. Barb McDonald says:

    Has anyone run across or have experience with David Kolb’s Experiential Learning Theory and the Learning Style Inventory? I recently saw a LinkedIn post where someone is advertising a class on how to implement it. I did a quick search on Kolb, and he’s written quite a few books and articles. Most of his work was done in the early 80s, but I’m also seeing current research being done using both the Experiential Learning Cycle and the Learning Style Inventory. I found an article where he replies to Freedman and Stumpf, who apparently criticized his Learning Style Inventory, but I am not able to share it here. Full disclosure: I haven’t read all of it, nor have I been able to dig too deeply into it, which is why I’m bringing the question here. Thanks.

  2. Rick Presley says:

    I noticed this eminent Ph.D. and Assistant Dean for Education Innovation providing insight on how to apply Dale’s Cone in learning: http://www.queensu.ca/teachingandlearning/modules/active/documents/Dales_Cone_of_Experience_summary.pdf

    What suddenly struck me was how at odds this “research” is with that of learning styles. If learning styles are a thing, then shouldn’t the percentages vary at each level of the cone depending on the individual’s learning style?

    I’m wondering why I’ve never seen any debate about this inherent contradiction from proponents.

  3. Carolyn Stoll says:

    Reading an article in Inside Higher Ed in which our campus was featured for its initiatives to improve accessibility for our online content, I saw this little gem:

    …”accessibility efforts …can even serve students who don’t have learning disabilities but learn better from reading text than hearing it out loud…”.

    I see this all the time. There are two problems with it. One, it obviously assumes both that learning styles are legitimate AND that the reader knows that and agrees with it. The term “learning styles” isn’t even used, and it doesn’t have to be. What’s not said is just as important as what is.

    And two, used in the context of accessibility, this kind of statement means precious time and effort are diverted from REALLY making content accessible to PRETENDING to by addressing learning styles.

    This is why addressing myths about how people learn is so important. Real people with real learning problems don’t get helped when we mess around with junk science.

  4. Will Thalheimer says:

    Daniel Engber, writing in Slate, argued earlier this year that worries about a post-truth age were overstated. Here is his article: https://slate.com/health-and-science/2018/01/weve-been-told-were-living-in-a-post-truth-age-dont-believe-it.html

    Today Engber (on Twitter) pointed us to another article, which found that a common worry about debunking, namely that too many persuasion attempts might backfire, appears to be false. It looks like more debunking efforts are better than fewer debunking efforts! Here’s the pre-publication research article: http://www.emc-lab.org/uploads/1/1/3/6/113627673/ecker.2018ip.jarmac.pdf

    I’m sure there will be more studies to come. We live in a time when fake news, misinformation, and deception are rampant. Researchers will want to take a look to see what can be done.

  5. Troy Hudson says:

    I am moderating a webinar teaching litigation-support people how to write reports. The lovely slide of Edgar Dale’s cone, defiled by percentages, appeared with “University of Texas” at the top, as if this were a theory endorsed by them. That led me to a series of searches that ultimately landed me on this website.

  6. Carolyn Stoll says:

    This piece from Campus Technology features people from my college who are part of the accessibility initiative at our university. It’s shot through with references to learning styles as a justification and argument in favor of Universal Design for Learning. One of the ways you make your learning “universal” is to attend to different learning styles. It’s maddening.

    https://campustechnology.com/articles/2018/09/05/making-etextbooks-more-interactive.aspx?s=ct_le_050918&m=1

  7. Dan Topf, CPT says:

    On today’s CBS This Morning, they presented a story on the University of Vermont’s efforts to encourage positive choices in their incoming first-year students. They said the professor and staff are using neuroscience to do so. They aren’t, and it’s very odd. They are using behavioral psychology, behavioral economics, and other cognitive principles, in my opinion. What do you think? https://www.cbs.com/shows/cbs_this_morning/video/WOopsW_X28BMsNGK2S7pyQBn9nLUmZA4/substance-free-dorms-at-university-of-vermont-encourage-wellness/

  8. kentclizbe says:

    Neuroleadership institute.

    While “brain science” and “neuro-” seem to be really hot nowadays, the NLI is at the forefront of pushing this belief system.

    https://neuroleadership.com/

    I worked at a large company, in the Sales training group. They had totally swallowed the NLI approach. They did not have a grounded learning program and could not identify what competencies were actually required for success on the job. They had a pretty LMS, with pretty, flashy elearning, mostly unrelated to the target audience’s actual skills. And they slipped in NLI lingo every chance they could.

    I went to an NLI conference. Every insight NLI offered, while flashing pictures of the brain lighting up in different colors on the screen, came from Adult Learning, Instructional Design, and real learning science.

    I approached the NLI leader at the end of one session, proffered that observation, and asked if he was familiar with real learning research. His eyes sort of glazed over and he muttered “Interesting” before heading off to engage with his starry-eyed acolytes.

    Will, is there a section devoted to debunking “Neuro-” (fill in the blank) approaches to learning?

    Thanks.

    • Carolyn Stoll says:

      You’re right, Will, this is an interesting article. We all should be cautioned because words DO matter.

      BUT…we have to be careful not to veer into censorship. I’m always a little leery of the argument that we have to protect people whose critical-thinking skills are not well developed, be they kids or less-educated adults, from viewpoints that might “harm” them or lead them to believe falsehoods. That’s a little condescending to me and, taken in a different context, sounds a little like brainwashing. That’s not this writer’s point, I don’t think, but it’s an argument I’ve heard before in defense of censorship.

      All voices should be heard, including ones we don’t agree with or that are patently false. Don’t believe for a second that not hearing falsehoods or myths means nobody believes them. Only by getting them out in the air and then addressing them can they be countered. That’s why I appreciate the Debunker Club. We change minds by pointing to those voices and providing the evidence for why they’re wrong.

  9. Colin Geissler says:

    While looking for scholarly studies, including meta-analyses, on the efficacy of “Emotional Intelligence” training, I found a couple of studies with some positive results, but there were problems, like sample size and study design, that are hard to dismiss.

    What I found more interesting is the amount of churn in the literature about the definition of EI, and the lack of experimental research, despite:
    A. The prevalence of the term appearing in PubMed since 1985: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6246631/bin/fpsyg-09-02155-g001.jpg
    B. Comments in the literature such as:
    – November 2018: “This paper discusses one of the most pervasive problems regarding EI-related individual differences, namely, the lack of a meaningful theoretical framework.” https://dx.doi.org/10.3389%2Ffpsyg.2018.02155
    – April 2018 “Human resource practitioners place value on selecting and training a more emotionally intelligent workforce. Despite this, research has yet to systematically investigate whether emotional intelligence can in fact be trained.” https://doi.org/10.1016/j.hrmr.2018.03.002
    – June 2004 “…this review demonstrates that recent research has made important strides towards understanding the usefulness of EI in the workplace. However, the ratio of hyperbole to hard evidence is high, with over‐reliance in the literature on expert opinion, anecdote, case studies, and unpublished proprietary surveys.” https://doi.org/10.1111/j.1464-0597.2004.00176.x

    I am not suggesting that EI isn’t something real, but I have a feeling that all the EI training corporations are spending millions of dollars on may be worthy of scrutiny. Does anyone have any good references for this?

    Thanks.

  10. kentclizbe says:

    An interesting two-fer spotted on LinkedIn.

    A user posted a link to the infamous Learning Pyramid.

    Another user commented:

    “Frank Lopez Strategic Management at Harvard Extension School
    This pyramid puts a renewed spin on the classic 70-20-10 model — Learning (Lecture, Reading, Audiovisual) is at 20% — Connect (Demonstration and Discussion) at 30% — and Experience (Practice Doing and Teach Others) at 50%. The emergence of CoP and CoE’s playing a larger role in promoting a learning culture.”

    This conflation of two debunked semi-scams was just missing a mention of Learning Styles for a trifecta!

  11. Carolyn Stoll says:

    So, this isn’t a myth I’ve heard about before, but it sounds dangerously close to being one. The idea is that the optimal score on a test is 85%: higher than that and the material was too easy for you; lower than that and the material was too hard. The 85% mark hits the “sweet spot” of optimal challenge.

    I like the idea of learning being like a video game: too easy and it’s boring; too hard and it’s frustrating. But attaching numbers like this always makes me uncomfortable. I haven’t read the cited articles in this piece yet, but I will and will chime in again then. In the meantime, is it just my Myth-dar that’s going off here?

    https://blogs.scientificamerican.com/observations/how-wrong-should-you-be/

    • Will Thalheimer says:

      I just saw the article on this (from three days ago, 14 January 2019) and accessed the original study. The research is NOT really generalizable. They do a study where (and you won’t believe this) “ambiguous stimuli must be sorted into one of two classes.” Where do real learners learn by sorting ambiguous stimuli? This is complete poppycock!

      Unfortunately, the paper has a ton of mathematical formulae, so it will look super credible and people may pick this up…

      Watch for it… We will soon be told to shoot for an 85% cutoff score on our knowledge checks!

      Ugh!

      • Carolyn Stoll says:

        I don’t even know what that means…”sorting ambiguous stimuli?” Do they mean…guessing?

        • Will Thalheimer says:

          I went back to the research article to find you an example. Reading the paper more closely, I found that they didn’t even run an actual experiment; they just did mathematical simulations.

          Here is a description of the kind of task they are simulating. A quote from the article:

          “In a standard binary classification task, a human, animal or machine ‘agent’ make binary decisions about simple stimuli. For example, in the classic Random Dot Motion paradigm from Psychology and Neuroscience [15, 16], stimuli consist of a patch of moving dots – most moving randomly but a small fraction moving coherently either to the left or the right – and participants must decide in which direction the coherent dots are moving.”
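
          To make the quoted task concrete, here is a toy sketch of my own (not the authors’ model, and the `coherence` parameter is just my stand-in for the fraction of coherently moving dots): a simulated agent sees one noisy evidence sample per trial and answers with its sign, and accuracy depends only on task difficulty.

```python
import random

def simulate_accuracy(coherence, noise=1.0, trials=20000, seed=0):
    """Toy binary classification task in the spirit of the Random Dot
    Motion paradigm: each trial has a true direction (+1 or -1), the
    agent observes one noisy evidence sample, and answers with its sign."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        direction = rng.choice([-1, 1])                # true class
        evidence = direction * coherence + rng.gauss(0.0, noise)
        decision = 1 if evidence >= 0 else -1          # sign readout
        correct += decision == direction
    return correct / trials

# Accuracy climbs as the task gets easier (higher coherence); an "85%
# sweet spot" is just one point on this difficulty-accuracy curve.
for c in (0.2, 0.5, 1.0, 1.5, 2.0):
    print(f"coherence={c:.1f}  accuracy={simulate_accuracy(c):.3f}")
```

          Raising coherence makes the task easier and pushes accuracy toward 100%; the paper’s claim, as I read it, is only that learning in this kind of setup goes fastest when difficulty is tuned so accuracy sits near 85%.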

  12. Sean Rea says:

    A new article on Listening Styles was just published in the February edition of TD magazine, and I’d like to get the group’s thoughts on it. The author refers in several places to “research” on listening styles but provides no sources. So, as a good debunker, I went to her website and downloaded the validation research paper on their instrument, and of course there is no source info; it just states, “ECHO has undergone rigorous analysis in all four of these categories in conjunction with experts at the University of Mississippi.”

    The fact that she talks about the research around learning styles, and that we all learn differently, got my back up right away. Plus, the article is about a proprietary tool. I’ve never heard of listening styles before, so I thought I would throw it out to the group. A Google search shows a few results, but not in a training context.

    Here is the article:
    https://www.td.org/magazines/td-magazine/shhhh-listen

    The author’s website:
    https://www.echolistening.com/

    The research paper:
    https://drive.google.com/file/d/1nEN-EY72kXNMGpogEL2LN9LsUAkjp7PI/view?usp=sharing

    Curious to hear what the group thinks.

Comments are closed.