English-Corpora.org

1. Who created these corpora?

The underlying corpus architecture and web interface were created by Mark Davies, (retired) Professor of Linguistics. In most cases, he also designed, collected, edited, and annotated the corpora. In the case of the BNC, Strathy, EEBO, and Hansard corpora, he received the texts from others and "just" created the architecture and interface. So although this and other pages use the terms "we" and "us", most activities related to the development of most of these corpora were actually carried out by just one person.

2. Who else contributed?

Multiple corpora

The Corpus del Español, the Corpus do Português, and the new Corpus of Historical American English were funded by large grants from the National Endowment for the Humanities.

Multiple corpora

Paul Rayson provided the CLAWS tagger, which was used for all of the English corpora.

COCA

Some BYU students helped to scan a few of the novels.

COHA

Several BYU students helped to scan novels, magazines, and non-fiction books, and to process and correct the files and lexicon.

Google Books

Based on the datasets from Google Books

BNC

The original texts were licensed for re-use from Oxford University Press.

Strathy

The textual corpus was designed and created at the Strathy Language Unit at Queen's University in Canada.

Hansard, EEBO

The vast majority of the work on the corpus (including semantic tagging) was done by other participants in the SAMUELS project; we simply created the corpus architecture and interface.

3. What is the advantage of these corpora over other ones that are available?

For some languages and time periods, these are essentially the only corpora available. For example, in spite of earlier corpora like the American National Corpus and the Bank of English, our Corpus of Contemporary American English is the only large, balanced corpus of contemporary American English. In spite of the Brown family of corpora and the ARCHER corpus, the Corpus of Historical American English is the only large, balanced corpus of historical American English. And while the ICE corpora are useful for looking at dialectal variation in English, the GloWbE corpus is about 100 times as large (and somewhat more diverse). Beyond the "textual" corpora themselves, the corpus architecture and interface that we have developed allow for speed, size, annotation, and a range of queries that we believe are unmatched by other architectures, and this makes them useful even for corpora, such as the British National Corpus, that do have other interfaces. Also, they're free -- a nice feature.

4. What software is used to index, search, and retrieve data from these corpora?

We have created our own corpus architecture, using Microsoft SQL Server as the backbone of a relational database approach. This architecture allows for a combination of size, speed, and scalability that we don't believe is available with any other architecture. Even complex queries of the one billion word COCA corpus or the 400 million word COHA corpus typically take only two or three seconds (and not much more for the 14 billion word iWeb corpus). In addition, because of the relational database design, we can keep adding annotation "modules" with little or no performance hit. Finally, the relational database design allows for a range of queries that we believe is unmatched by any other architecture for large corpora.
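To make the relational design concrete, here is a minimal sketch of the general idea in Python (using SQLite purely for illustration). The schema is our own invention for this example; the actual English-Corpora.org schema is not public and runs on Microsoft SQL Server.

```python
import sqlite3

# Toy version of the general relational idea: the corpus itself is just a
# sequence of integer word IDs, and annotation lives in separate tables
# keyed on those IDs. (Illustrative only -- not the real schema.)
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE lexicon (word_id INTEGER PRIMARY KEY, word TEXT, lemma TEXT, pos TEXT);
    CREATE TABLE corpus  (position INTEGER PRIMARY KEY,
                          word_id  INTEGER REFERENCES lexicon(word_id));
""")
db.executemany("INSERT INTO lexicon VALUES (?, ?, ?, ?)", [
    (1, "the",  "the", "AT0"),   # CLAWS-style tags, for flavor
    (2, "dogs", "dog", "NN2"),
    (3, "ran",  "run", "VVD"),
])
db.executemany("INSERT INTO corpus VALUES (?, ?)", [(1, 1), (2, 2), (3, 3)])

# An extra annotation "module" (e.g. semantic tags) would simply be
# another table joined on word_id, leaving the core corpus table untouched.
rows = db.execute("""
    SELECT c.position, l.word, l.lemma, l.pos
    FROM corpus AS c JOIN lexicon AS l ON c.word_id = l.word_id
    WHERE l.lemma = 'dog'
""").fetchall()
print(rows)  # [(2, 'dogs', 'dog', 'NN2')]
```

Because the corpus table holds only integer keys, searches by word form, lemma, or tag reduce to indexed joins, which is the kind of design that keeps complex queries fast even on very large corpora.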

5. How many people use the corpora?

As measured by Google Analytics, as of October 2014 the corpora are used by more than 130,000 unique visitors each month. The most widely used corpus is the Corpus of Contemporary American English -- with more than 65,000 unique users each month. And people don't just come in, look up one word, and move on: the average visit to the site lasts between 10 and 15 minutes.

6. What do they use the corpora for?

For lots of things. Linguists use the corpora to analyze variation and change in the different languages. Materials developers use the data to create teaching materials. Many users are language teachers and learners, who use the corpus data to model native-speaker performance and intuition. Translators use the corpora to get precise data on their target languages. Other people in the humanities and social sciences look at changes in culture and society (especially with COHA and Hansard). Some businesses purchase data from the corpora for use in natural language processing projects. And lots of people are just curious about language and (believe it or not) use the corpora for fun, to see what's going on with the languages currently. To get a better idea of what people are doing with the corpora, check out (or search through) the entries on the Researchers page.

7. What about copyright?

Our corpora contain hundreds of millions of words of copyrighted material. Their use is legal (under US fair use law) only because of the limited "Keyword in Context" (KWIC) displays. It's similar to the "snippet defense" used by Google: Google retrieves and indexes billions of words of copyrighted material, but it only lets end users access "snippets" of this data from its servers. Click here for an extended discussion of US fair use law and how it applies to our COCA texts.
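For illustration, here is a minimal sketch of what a KWIC display involves: each hit is shown with only a limited window of context, never the full text. (This is a toy example of the concept, not the site's actual display code.)

```python
import re

def kwic(text, keyword, context=30, max_hits=5):
    """Return limited 'Keyword in Context' snippets: a little context on
    either side of each hit, never the full running text."""
    snippets = []
    for m in re.finditer(rf"\b{re.escape(keyword)}\b", text, re.IGNORECASE):
        left = text[max(0, m.start() - context):m.start()]
        right = text[m.end():m.end() + context]
        snippets.append(f"{left:>{context}} [{m.group()}] {right}")
        if len(snippets) >= max_hits:
            break
    return snippets

sample = ("The corpus shows how usage changes over time, and the corpus "
          "interface lets users search the corpus quickly.")
for line in kwic(sample, "corpus"):
    print(line)
```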

8. Can I get access to the full text of these corpora?

Downloadable, full-text data is now available for the following corpora: iWeb, COCA, COHA, GloWbE, NOW, Wikipedia, SOAP, the TV corpus, the Movie corpus, and the Corpus del Español.

9. Is there API access to the corpora?

No, there isn't, for two main reasons. First, we don't hold the copyright to the texts in the corpora, so we can only provide limited access to them, via the corpus interface. Second, we're already pretty "maxed out" in terms of the one corpus server, and API access would probably lead to quite a few more queries than we can handle right now. Although we don't allow API access, some people have scripted browsers (via Python or C++ or whatever) to allow for semi-automated queries (note, though, that we don't provide tech support for this).
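For what it's worth, such scripts typically just drive the ordinary web interface. The sketch below is purely hypothetical: the URL and the query parameter are placeholders (there is no documented endpoint), a real script would first have to log in through the normal web forms, and the usual daily query limits still apply.

```python
import time

import requests  # third-party library: pip install requests

SEARCH_URL = "https://www.english-corpora.org/..."  # placeholder, not a real endpoint

session = requests.Session()
# ...log in here through the normal web forms so the session carries cookies...

for query in ["take a decision", "make a decision"]:
    # "q" is a made-up parameter name for this sketch
    resp = session.get(SEARCH_URL, params={"q": query})
    print(query, resp.status_code, len(resp.text))
    time.sleep(5)  # pace requests politely; daily limits still apply
```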

10. My access limits (for "non-researcher") are too low. Can I increase them?

"Non-researchers" (Level 1) have 50 queries a day, or about 1,500 queries per month. For most people, this is way more than enough. But if you really do need more than 1,500 queries per month, then you might want to upgrade to a premium account, which also helps to support the corpora, in which case you will have 200 queries a day.

11. My organization doesn't list my name on a web-page. Can I still register to use the corpora?

You do not need to register as a "researcher" to use the corpora. Even the lowest-level, default "non-researcher" status gives you 50 queries a day, or about 1,500 queries per month. For most people, this is more than enough. The only downside is that you won't be included on the list of researchers, but that's not a huge deal. On the other hand, if you really do need more than 1,500 queries per month, you might want to upgrade to a premium account (which also helps to support the corpora), in which case you will have 200 queries a day (6,000 per month).

12. I want more data than what's available via the standard interface. What can I do?

Users can purchase offline data -- such as full-text copies of the texts, frequency lists, collocates lists, and n-grams lists (e.g. all two-word or three-word sequences). Click here for much more detailed information on this data, as well as downloadable samples.
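As a quick illustration of what an n-grams list contains, the sketch below builds two-word and three-word frequency lists from a toy text. (The purchased datasets are, of course, precomputed from the full corpora.)

```python
from collections import Counter

def ngrams(tokens, n):
    """All n-word sequences in a token list, e.g. n=2 gives bigrams."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the dog ran and the dog barked".split()
for size in (2, 3):
    freq = Counter(ngrams(tokens, size))
    print(size, freq.most_common(2))
# 2 [('the dog', 2), ('dog ran', 1)]
# 3 [('the dog ran', 1), ('dog ran and', 1)]
```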

13. Can my class have additional access to a corpus on a given day?

There is a limit of 250 queries per 24 hours for a "group", where a group is typically a class of students or a department at a university. If you need more queries than this, you'd want an academic / site license.

14. Why have you started offering academic licenses and premium accounts?

There are a number of reasons for the academic licenses and "premium" accounts, which were introduced in early 2015. One is to provide income for the corpora. The creator and administrator of the corpora will be retiring in Summer 2020, and there needs to be some viable model for the financial sustainability of the corpora beyond that date.

15. I don't want to see the messages that appear every 10-15 searches as I use the corpora.

If you have a premium account ($30 for a full year), you won't see these messages during the period for which your account is valid.

If you just want a basic account and are really bothered by the messages, you might want to consider other web-based corpora -- like those from Lancaster University (including BNCweb), CorpusEye, or the many excellent corpora from Sketch Engine. (Please be aware, though, that the subscription fee for the Sketch Engine corpora is somewhat higher than the cost of a premium account for our corpora.)

16. How do I cite the corpora in my published articles?

Please use the following information when you cite the corpus in academic publications or conference papers. Thanks.

COCA

Davies, Mark. (2008-) The Corpus of Contemporary American English (COCA): One billion words, 1990-2019. Available online at https://www.english-corpora.org/coca/.

iWeb

Davies, Mark. (2018-) The 14 Billion Word iWeb Corpus. Available online at https://www.english-corpora.org/iWeb/.

COHA

Davies, Mark. (2010-) The Corpus of Historical American English (COHA): 400 million words, 1810-2009. Available online at https://www.english-corpora.org/coha/.

TIME

Davies, Mark. (2007-) TIME Magazine Corpus: 100 million words, 1920s-2000s. Available online at https://www.english-corpora.org/time/.

TV

Davies, Mark. (2019-) The TV Corpus: 325 million words, 1950-2018. Available online at https://www.english-corpora.org/tv/.

Movies

Davies, Mark. (2019-) The Movie Corpus: 200 million words, 1930-2018. Available online at https://www.english-corpora.org/movies/.

BNC

Davies, Mark. (2004-) British National Corpus (from Oxford University Press). Available online at https://www.english-corpora.org/bnc/.

NOW

Davies, Mark. (2016-) Corpus of News on the Web (NOW): 10 billion words from 20 countries, updated every day. Available online at https://www.english-corpora.org/now/.

GloWbE

Davies, Mark. (2013) Corpus of Global Web-Based English: 1.9 billion words from speakers in 20 countries (GloWbE). Available online at https://www.english-corpora.org/glowbe/.

EEBO

Davies, Mark. (2017) Early English Books Online. Part of the SAMUELS project. Available online at https://www.english-corpora.org/eebo/.

Hansard

Davies, Mark. (2015) Hansard Corpus. Part of the SAMUELS project. Available online at https://www.hansard-corpus.org/.

Wikipedia

Davies, Mark. (2015) The Wikipedia Corpus: 4.6 million articles, 1.9 billion words. Adapted from Wikipedia. Available online at https://www.english-corpora.org/wiki/.

SOAP

Davies, Mark. (2011-) Corpus of American Soap Operas: 100 million words. Available online at https://www.english-corpora.org/soap/.

CORE

Davies, Mark. (2016-) Corpus of Online Registers of English (CORE). Available online at https://www.english-corpora.org/core/.

Corpus del Español

Davies, Mark. (2016-) Corpus del Español: Two billion words, 21 countries. Available online at http://www.corpusdelespanol.org/web-dial/. (Web / Dialects)

Davies, Mark. (2002-) Corpus del Español: 100 million words, 1200s-1900s. Available online at http://www.corpusdelespanol.org/hist-gen/. (Historical / Genres)

Corpus do Português

Davies, Mark. (2016-) Corpus do Português: One billion words, 4 countries. Available online at http://www.corpusdoportugues.org/web-dial/. (Web / Dialects)

Davies, Mark and Michael Ferreira. (2006-) Corpus do Português: 45 million words, 1300s-1900s. Available online at http://www.corpusdoportugues.org/hist-gen/. (Historical / Genres)

Google Books

Davies, Mark. (2011-) Google Books Corpus. (Based on Google Books n-grams). Available online at http://www.english-corpora.org/googlebooks/. Based on:
Jean-Baptiste Michel*, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, The Google Books Team, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden*. Quantitative Analysis of Culture Using Millions of Digitized Books. Science 331 (2011) [Published online ahead of print 12/16/2010].

WordAndPhrase

Davies, Mark. (2011-) WordAndPhrase (based on data from the COCA corpus). Available online at https://www.wordandphrase.info.

Word frequency data

Davies, Mark. (2011-) Most frequent 100,000 word forms in English (based on data from the COCA corpus). Available online at https://www.wordfrequency.info/.

N-grams data

Davies, Mark. (2011-) English n-grams (based on data from the COCA corpus). Available online at https://www.ngrams.info/.

In the first reference to the corpus in your paper, please use the full name. For example, for COCA: "the Corpus of Contemporary American English" with the appropriate citation to the references section of the paper, e.g. (Davies 2008-). After that reference, feel free to use something shorter, like "COCA" (for example: "...and as seen in COCA, there are..."). Also, please do not refer to the corpus in the body of your paper as "Davies' COCA corpus", "a corpus created by Mark Davies", etc. The bibliographic entry itself is enough to indicate who created the corpus. Otherwise, it just kind of sounds strange and overly proprietary.