Has Mr. Piketty acknowledged any errors?

Not really. Mr. Piketty concedes no outright errors in his original work, though he acknowledges that many of his choices can and should be subject to debate and are worth refining further.

And he says he could have done a better job disclosing his methods and reasons for data adjustments. For example, in addressing one piece of early 20th-century Swedish inequality data that Mr. Giles argued was in error, Mr. Piketty wrote, “I agree that this adjustment should have been made more explicit in the technical appendix and Excel file.”

What broader lessons can be drawn from this controversy about the nature of social science, historical research and the search for truth?

Quite a few! But this is the biggest one: The work by Mr. Piketty and others trying to study economic history is challenging for a lot of reasons, not least that good economic data is generally unavailable for anything beyond the most recent few decades. So researchers must use whatever sources are available, frequently old tax filings, to estimate how things were in an earlier era.

The problem is, to make that data useful — and particularly to make it comparable to more recent data that is collected in a rigorous and transparent way — scholars have to make hundreds of adjustments to account for various factors that could throw off the numbers. To cite one of many adjustments that was at issue in the recent controversy: If you want to know what the level of wealth inequality was in 1930s France based on estate tax data, you must use some mechanism to deal with the fact that the people paying the estate tax are, well, dead, and probably don’t precisely line up with the wealth trends of all people who were alive at the time.
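The estate-tax adjustment described above is a version of what economists call a mortality-multiplier method: estates observed at death are scaled up by the inverse of each group's mortality rate to estimate wealth among the living. The sketch below is a hypothetical illustration of that idea only, not Mr. Piketty's actual procedure; every figure in it is invented, and the real adjustments involve many more factors.

```python
# Hypothetical sketch of a "mortality multiplier" adjustment.
# Wealth observed in estates at death is weighted by 1 / mortality
# rate to approximate the wealth of the living population.
# All numbers below are invented for illustration.

# (age_group, total_estate_wealth_observed, annual_mortality_rate)
estate_data = [
    ("40-59", 2_000_000, 0.010),   # younger decedents are rare...
    ("60-79", 5_000_000, 0.040),
    ("80+",   3_000_000, 0.150),   # ...older decedents are common
]

def estimate_living_wealth(rows):
    """Scale each group's estate wealth by the inverse of its mortality rate."""
    return sum(wealth / rate for _, wealth, rate in rows)

total = estimate_living_wealth(estate_data)
print(f"Estimated wealth of the living: {total:,.0f}")
```

The point of the example is less the arithmetic than the judgment calls hidden inside it: which age groups to use, which mortality table to apply, and how to handle the fact that the rich tend to live longer than average. Each choice is exactly the kind of adjustment the controversy turned on.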

Because scholars must make countless assumptions to turn raw records into useful data, there are countless opportunities for either conceptual error or willful manipulation. In that sense, the casual reader is trusting the researcher to make those judgments in a consistent, logical way that does not tilt the data one direction or another.

Any other points?

Perhaps that people doing heavy-duty social science should consider using programming languages that allow a clearly disclosed and annotated series of steps, which outside researchers can more easily check and second-guess, instead of Microsoft Excel or other simple spreadsheets. Here’s our own Austin Frakt arguing just that. That said, Mr. Frakt notes that Mr. Piketty’s analysis is ultimately fairly simple mathematically, and so using simple spreadsheets, and then opening them up for the world to see and second-guess, may have been the best way to go.
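The disclosure argument can be made concrete. In a script, each adjustment is a named, commented step that another researcher can read and re-run, which is harder to guarantee in a chain of spreadsheet cells. As a hypothetical illustration, here is one common adjustment in historical series, filling in years with no tax data by linear interpolation between observed benchmark years, written so the assumption is visible in the code itself. The figures are invented.

```python
# Hypothetical illustration: a scripted adjustment step whose
# assumption (linear change between benchmark years) is stated
# explicitly, rather than buried in spreadsheet cell formulas.
# All figures are invented.

observed = {1910: 0.55, 1930: 0.48}  # top-decile wealth share, benchmark years

def interpolate(year, data):
    """Linearly interpolate between the nearest observed years."""
    years = sorted(data)
    lo = max(y for y in years if y <= year)
    hi = min(y for y in years if y >= year)
    if lo == hi:
        return data[lo]
    frac = (year - lo) / (hi - lo)
    return data[lo] + frac * (data[hi] - data[lo])

print(interpolate(1920, observed))  # midway between 0.55 and 0.48
```

A reader who disagrees with the linearity assumption can see it, swap in another rule, and re-run the analysis, which is precisely the kind of scrutiny that openness is meant to enable.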