
Paul McLellan

#learntocode

21 Mar 2019 • 7 minute read

When a few hundred journalists were laid off recently, there was a lot of activity on Twitter (and probably elsewhere) suggesting that journalists should learn to code.

One thing that I've noticed is that journalists are inordinately concerned with... journalists. If factory workers are laid off, that's just the way of the world. If journalists are laid off, it is a crisis deserving of column-yards of coverage, and a declaration that "democracy dies in darkness". In the case of "learn to code," this was deemed to be really insulting to journalists, to the extent that the phrase would get you banned from Twitter for using it.

 It actually started with President Obama. He didn't explicitly tell laid-off coal-miners to learn to code, but in his document On the Strength and Resilience of Rural America he said:

In Pikeville, Kentucky, former coal miners are trading coal for code. They’re retraining to learn HTML, JavaScript, and PHP, transforming an old bottling factory into a digital hub. It’s a transition that not only supports good jobs, but also offers a glimpse of what the future could look like in other communities like Pikeville.

But when the same thing is suggested for journalists, it is big news.

I Learned to Code

As it happens, I did learn to code. I started when I was 14, studied computer science at university, and ended up doing a Ph.D. in computer science (on a topic that required coding—I wasn't doing theory). Then, for over a decade, coding EDA tools was my job (or managing people coding).

What's more, as a postgraduate I taught people to code, as one of many small-group tutors for CS1 (what in the US would be called CS101) over a period of several years. I also taught the Introductory Programming course for our Master's in Computer Engineering. The Master's course assumed you could program, but it was not a prerequisite; if you could not, then you got a week of me teaching you how to program (in Pascal). So not only did I "learn to code", but I also "taught to code."

Unfortunately, most people cannot learn to code. By most people, I mean half or more. It's not "most people cannot learn the General Theory of Relativity" where I would guess that would mean 99% cannot, but it is a significant percentage.

Here's one good description of teachers' experience:

Despite the enormous changes which have taken place since electronic computing was invented in the 1950s, some things remain stubbornly the same. In particular, most people can't learn to program: between 30% and 60% of every university computer science department's intake fail the first programming course. Experienced teachers are weary but never oblivious of this fact; bright-eyed beginners who believe that the old ones must have been doing it wrong learn the truth from bitter experience; and so it has been for almost two generations, ever since the subject began in the 1960s.

That was certainly my experience too. Of course, I'm not going to claim I was the world's greatest teacher, so some of the responsibility is presumably mine. It is actually really difficult to teach introductory programming once you are an experienced programmer, because you can't remember anymore what you found difficult when you were learning. It is much easier to teach an advanced course near the research horizon, as I also did with a course on computer networking for final-year and master's students.

It turns out that programming aptitude is very bimodal. There is a sort of normal distribution of people who can't code and are never really going to learn. And a separate normal distribution of people who can. This is very different from most subjects, where there is a single normal distribution. Some people are not good at, say, organic chemistry, most people are average, and some are very good. The usual bell curve.
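To make "bimodal" concrete, here is a small sketch with invented numbers (purely illustrative, not data from any study): a single bell curve has its highest density at the mean, while a two-humped mixture has a valley there instead.

```python
# Illustrative only: invented score distributions, not data from any study.
import random

random.seed(42)
N = 10_000

# A single normal distribution: say, organic chemistry scores.
chemistry = [random.gauss(60, 12) for _ in range(N)]

# A bimodal mixture: one hump that never really gets it, one that does.
programming = [
    random.gauss(35, 8) if random.random() < 0.5 else random.gauss(75, 8)
    for _ in range(N)
]

def density_near(scores, center, width=5.0):
    """Fraction of scores falling within +/- width of center."""
    return sum(1 for s in scores if abs(s - center) <= width) / len(scores)

# Chemistry peaks at its single mean...
print(density_near(chemistry, 60))
# ...while programming dips at 55, the valley between the camel's humps.
print(density_near(programming, 55))
```

Averaging a bimodal class into one "mean score" hides exactly this structure, which is why the two-hump picture matters for how you interpret course results.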

One of the problems is that it doesn't seem to tie into anything obvious. There is some connection to prior academic success, since programming is an intellectual endeavor. But many first-year students at Edinburgh University, where I taught, struggled. Edinburgh is not a community college; it is one of the universities in the tier immediately below Oxford and Cambridge, with high academic standards for acceptance, so nobody there fails to learn to code simply from not being smart enough.

It is not even tied directly to technical experience. The best programmer I ever came across was my first Ph.D. supervisor. He was a classicist who had studied Latin and Greek at Oxford before being recruited into a programming position at a company, and later ended up in academia.

The Camel Has Two Humps

The paragraph I quoted above is the opening paragraph of the paper The Camel Has Two Humps dating back to 2006. I should point out that when I went to find the paper again to write this post, I discovered that it was retracted in 2014 by the second author after some sort of witch-hunt in which he had been suspended from his job over the paper. In the retraction paper, he states "we hadn't found that nature trumps nurture," which sounds more like something out of a sociology department than a science and technology school. Reading between the lines, he had to fall on his sword to get his job back.

But even the retraction paper emphasizes that the problem is real:

Widespread low achievement in undergraduate programming studies is well known and well researched (see, for example, (Lister et al., 2004; Tolhurst et al., 2006; Fincher et al., 2005; Simon et al., 2006)). The question of failure rates is less well researched and is controversial (Bennedsen and Caspersen, 2007). There have been all kinds of attempts to find predictors of performance, with little success; Robins et al. (2003) gives a summary.

If you accept that some people have an aptitude for programming, which I do, then it would be good to have a way to discern who is going to thrive and who is not. The alternative is to try to teach everyone to code and then have half the class fail out, which is both dispiriting and expensive. Such a predictor of success has proved elusive. A-level maths (roughly equivalent to AP Calculus in the US) is not a predictor, for example. Having a computer in your bedroom during high school is not a predictor. It does not seem to depend on the country or culture that you are brought up in.

The original "Camel" paper tells a fascinating story. A test was developed to get feedback on how much students had learned on their programming course, a test to be given at the end of the course. The questions asked students to read short sequences of assignment statements and say what values the variables ended up with.

Through a mixup, this test was given to 30 students before they had received any programming instruction whatsoever. To cut a long story short (the paper is 21 pages long), it turned out that the results of this test were predictive of the results of the same type of test given at the end of the course. It's actually a bit more complicated than that, but I'm summarizing a whole paper here. People who did poorly on the test never really learned to program; people who did well were successful, at least as measured by the test administered at the end.

The theory as to why this was working was that the people who would go on to learn to program had a consistent view (even if wrong) of what the code they were reading meant. Writing programs is very detail-oriented, so this would seem a good foundation. As reported:

  1. 44% of students formed a consistent mental model of how assignment works (even if incorrect!)
  2. 39% of students never formed a consistent model of how assignment works.
  3. 8% of students didn't give a damn and left the answers blank.
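The "consistent mental model" idea can be sketched in code. This is my own illustration, not Dehnadi's actual test: imagine each question is a tiny program of assignments, and each candidate answer corresponds to one possible mental model of what `a = b` means.

```python
# A sketch of the "consistent mental model" idea -- my illustration, not
# Dehnadi's actual test. Each question is a tiny program of assignments,
# and each candidate answer corresponds to one mental model of "a = b".

MODELS = ("copy", "move", "swap")

def run(program, model):
    """Evaluate (target, source) assignment pairs under a mental model."""
    env = {"a": 10, "b": 20}  # every question starts from the same values
    for target, source in program:
        if model == "copy":    # conventional: the value flows right-to-left
            env[target] = env[source]
        elif model == "move":  # the value moves; the source is emptied to 0
            env[target] = env[source]
            env[source] = 0
        elif model == "swap":  # the two variables exchange values
            env[target], env[source] = env[source], env[target]
    return env

def consistent(answers, programs):
    """A student is 'consistent' if one single model explains every answer,
    even if that model is not the conventional (correct) one."""
    return any(
        all(run(p, m) == a for p, a in zip(programs, answers))
        for m in MODELS
    )

# For the one-line program "a = b", the three models give three answers:
#   copy -> a == 20, b == 20   (the conventional reading)
#   move -> a == 20, b == 0
#   swap -> a == 20, b == 10
```

In the paper's terms, the students who went on to learn to program were the ones whose answers all came from a single model, whichever model that was; the ones who failed mixed models from question to question.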

Some replication was done by other researchers, reported in the later "retraction" paper, the final paragraph of which is:

There wasn’t and still isn’t an aptitude test for programming based on Dehnadi’s work. It still appears to be true that novices who answer ‘consistently’ in the test are more likely to pass a programming course. Current work, by others, begins to suggest reasons for the phenomenon and open future research avenues.

Learn More

Start by reading the original paper The Camel Has Two Humps and Camels and Humps: A Retraction. And for a contrary position to the whole "learn to code" thing, try Please Don't Learn to Code. Or maybe learn plumbing?

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.