
Professor DV Bishop outlines the multiple flaws in the TEF methodology

In a previous post I questioned the rationale and validity of the Teaching Excellence and Student Outcomes Framework (TEF). Here I document the technical and statistical problems with TEF.

- Continuation – the proportion of students who continue their studies from year to year, as measured by data collected by the Higher Education Statistics Agency (HESA).
- Employment outcomes – what students do after they graduate, as measured by responses to the Destination of Leavers from Higher Education survey (DLHE).

As detailed further below, data on the institution’s quality indicators is compared with the ‘expected value’ that is computed from the institution’s contextual data. Discrepancies between obtained and expected values, either positive or negative, are flagged and used, together with a written narrative from the institution, to rate each institution as Gold, Silver or Bronze. This beginner’s guide provides more information.

Problem 1: Lack of transparency and reproducibility

When you visit the DfE’s website, the first impression is that it is a model of transparency. On this site, you can download tables of data and even consult interactive workbooks that allow you to see the relevant statistics for a given provider. Track through the maze of links and you can also find an 87-page technical document of astounding complexity that specifies the algorithms used to derive the indicators from the underlying student data, DLHE survey and NSS data. The problem, however, is that nowhere can you find a script that documents the process of deriving the final set of indicators from the raw data: if you try to work this out from first principles by following the HESA guidance on benchmarking, you run into the sand, because the institutional data is not provided in the right format.

When I asked the TEF metrics team about this, I was told: “The full process from the raw data in HESA/ILR returns, NSS etc. cannot be made fully open due to data protection issues, as there is sensitive student information involved in the process.” But this seems disingenuous. I can see that student data files are confidential, but once this information has been extracted and aggregated at institutional level, it should be possible to share it. If that isn’t feasible, then the metrics team should be able to at least generate some dummy data sets, with scripts that would do the computations that convert the raw metrics into the flags that are used in TEF rankings.

As someone interested in reproducibility in science, I’m all too well aware of the problems that can ensue if the pipeline from raw data to results is not clearly documented – this short piece by Florian Markowetz makes the case nicely. In science and beyond, there are some classic scare stories of what can happen when the analysis relies on spreadsheets: there’s even a European Spreadsheet Risks Interest Group. There will always be errors in data – and sometimes also in the analysis scripts: the best way to find and eradicate them is to make everything open.
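To make the dummy-data suggestion concrete, here is a minimal Python sketch of the kind of script the metrics team could publish: synthetic institution-level indicators in, benchmark comparison and flags out. Everything in it is invented for illustration – the provider names, the indicator values, and the two-percentage-point flagging threshold are assumptions, not the actual TEF benchmarking rules, which are specified in the 87-page technical document.

```python
# Hypothetical sketch, NOT the real TEF pipeline: generate dummy
# institution-level data and convert raw indicators into flags.
import random

random.seed(0)  # reproducible dummy data

# Each record holds an observed quality indicator (e.g. continuation rate)
# and the 'expected value' benchmarked from contextual data. Both columns
# are randomly invented here; a real release would compute 'expected' from
# the documented benchmarking algorithm.
institutions = [
    {"name": f"Provider {i}",
     "observed": random.uniform(0.80, 0.98),
     "expected": random.uniform(0.85, 0.95)}
    for i in range(5)
]

# Assumed placeholder rule: flag any discrepancy, positive or negative,
# larger than two percentage points.
THRESHOLD = 0.02

def flag(observed: float, expected: float, threshold: float = THRESHOLD) -> str:
    """Return '+' for a positive flag, '-' for a negative flag, '' otherwise."""
    diff = observed - expected
    if diff > threshold:
        return "+"
    if diff < -threshold:
        return "-"
    return ""

for inst in institutions:
    inst["flag"] = flag(inst["observed"], inst["expected"])
    print(f'{inst["name"]}: observed={inst["observed"]:.3f} '
          f'expected={inst["expected"]:.3f} flag={inst["flag"] or "none"}')
```

Publishing even a toy script like this, alongside dummy input files, would let outsiders check that the computations behave as the documentation claims, without exposing any sensitive student records.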
