Hartley, James (1998) An Evaluation of Structured Abstracts in Journals Published by the British Psychological Society. (to appear in) The British Journal of Educational Psychology
An Evaluation of Structured Abstracts in Journals Published by the British Psychological Society


James Hartley
Department of Psychology, Keele University



Michele Benjamin
Journals Office, British Psychological Society


Key words: abstracts; scientific communication; writing; readability; evaluation; written communication.


Correspondence and requests for reprints:

Professor James Hartley,
Department of Psychology,
Keele University,
ST5 5BG, UK.
E-mail: j.hartley@keele.ac.uk


Background. In 1997 four journals published by the British Psychological Society - the British Journal of Clinical Psychology, the British Journal of Educational Psychology, the British Journal of Health Psychology, and Legal and Criminological Psychology - began publishing structured abstracts.


Aims. The aim of the studies reported here was to assess the effectiveness of these structured abstracts by comparing them with original versions written in a traditional, unstructured, format.


Method. The authors of articles accepted for publication in the four journals were asked to supply copies of their original traditional abstracts (written when the paper was submitted) together with copies of their structured abstracts (written when the paper was revised). Forty-eight such requests were made, and 30 pairs of abstracts were obtained. These abstracts were then compared on a number of measures.


Results. Analysis showed that the structured abstracts were significantly more readable, significantly longer, and significantly more informative than the traditional ones. Judges assessed the contents of the structured abstracts more quickly and with significantly less difficulty than they did the traditional ones. Almost every respondent expressed positive attitudes to structured abstracts.


Conclusions. The structured abstracts fared significantly better than the traditional ones on every measure used in this enquiry. We recommend, therefore, that the editors of other journals in the social sciences consider the adoption of structured abstracts.


Readers of this article will have noticed that the abstract that precedes it is set in a different format from the ones traditionally employed in academic journals. The abstract to this present article is said to have a structured format. Such structured abstracts contain subheadings - such as background, aims, methods, results and conclusions. From January 1997 four journals published by the British Psychological Society - the British Journal of Clinical Psychology (BJCP), the British Journal of Educational Psychology (BJEP), the British Journal of Health Psychology (BJHP) and Legal and Criminological Psychology (LCP) - began to precede their articles with structured abstracts. This was a change of practice for the BJCP and the BJEP, and a new way of starting for the BJHP and LCP.

Why did these journals adopt this practice? Two reasons suggest themselves. First of all, structured abstracts have replaced traditional abstracts in most medical journals and, in that context, they have been subjected to considerable research and evaluation suggesting that they are an improvement over traditional ones. It is likely that the editors of three of the four journals which have clinical content were familiar with these developments. Secondly, additional research conducted with structured abstracts written for psychology journals has also suggested the feasibility of this approach in this context.

These two sets of evaluation studies, taken together, have found a number of advantages for structured abstracts.

However, there have been some qualifications. Such abstracts:

· (still) sometimes omit important information (Froom and Froom, 1993; Taddio et al, 1994); and
· are often printed in confusing typographic layouts (Hartley and Sydes, 1996).

Most of the research that has examined the quality of the abstracts has compared traditional abstracts published in journals before the advent of structured ones with the structured ones published at a later date in the same journals (e.g. see Taddio et al, 1994; Hartley and Sydes, 1997). McIntosh (1995), however, was able to compare structured abstracts with traditional ones when both were written by the same author(s). The traditional abstracts used in his study were those submitted for a forthcoming conference on paediatrics, and the structured abstracts were those subsequently requested by the conference organisers for the papers that were accepted. Hartley and Sydes (1997), using a subset of 29 pairs of abstracts from McIntosh’s study, found that in this case the structured abstracts had higher readability scores than did the traditional ones.

It is McIntosh’s approach that we have used in this paper. We took the opportunity that arose when the British Psychological Society changed its publishing policy on journal abstracts for the four journals listed above. Accordingly, we were able to write to the authors of papers accepted for publication in these journals for copies of the traditional abstracts that they had initially submitted, and for copies of the structured abstracts that they had been asked to write with their revisions.

In the light of the previous research summarised above, we hypothesised that the structured abstracts would be more readable, longer, more informative, and easier to evaluate than the traditional ones.


To obtain the necessary copies of the traditional and structured abstracts the first author of this paper wrote initially to the authors of papers in the first issues of the four relevant journals. He wrote next to Michele Benjamin in the Journals Office of the BPS to enquire whether or not she could let him know the names of the authors of articles that had been accepted for publication in subsequent issues. Finally he addressed this question specifically to the editor of the BJEP - who remains somewhat independent of the BPS Journals office.

In the event 48 letters were written to authors, and 30 replies - with the required pairs of abstracts - were received. (A further eight authors replied that they had not kept copies of their original abstracts.) Thus, including these eight, the response rate was approximately 80%.

Unfortunately - as Table 1 shows - the distribution of the responses per journal was uneven, and the sample of abstracts was heavily weighted towards the BJEP. This was an outcome of the editor of the BJEP insisting that his contributors rework their abstracts if their articles were to be included in the 1997 volume. The other three editors were more liberal in their requirements, asking only those authors who still had revisions to make to rewrite their abstracts, and accepting traditional abstracts from those authors whose papers had already been accepted for publication.


Table 1. The number of requests, and replies received, for pairs of abstracts.


                                                    Requests     Replies

Authors in Legal and Criminological Psychology          6            2

Authors in B. J. Clin. Psychol.                        10            4

Authors in B. J. Health Psychol.                        7            5

Authors in B. J. Educ. Psychol.                        25           19


Note: an additional 8 authors responded that they had not kept copies of their original traditional abstracts.


Furthermore, we need to note here that the four journals did not provide the same instructions for their authors. Thus the BJCP and BJHP asked for the following subheadings for their experimental articles:

and the following subheadings for review articles:

whereas the BJEP asked for:

and suggested some flexibility for authors of reviews. LCP asked for:

and the following headings for review articles:

The abstracts themselves were analysed in two main stages. In Stage 1, three text-based measures were made with word-processed versions of the abstracts. These comprised (i) a measure of word length and (ii) two computer-based measures of readability. The sub-headings were removed from the structured abstracts for all of these analyses. The measures were made with Grammatik 5 (a computer-based writing program). The two readability measures were the Flesch Reading Ease index and the Gunning Fog index. (Of these the Flesch is the more sensitive, but the Gunning is probably more appropriate for texts of this kind - see Sydes and Hartley, 1997.)
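For readers unfamiliar with these two formulae, their standard definitions can be sketched as follows. This is an illustrative approximation only: the syllable counter below is a crude vowel-group heuristic, and Grammatik 5's exact counting rules will differ, so scores from this sketch will not match those reported in Table 2.

```python
import re

def count_syllables(word):
    # Crude estimate: one syllable per run of consecutive vowels.
    # (An assumption -- commercial tools such as Grammatik 5 use
    # more elaborate rules.)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    """Return (Flesch Reading Ease, Gunning Fog) for a plain-text abstract."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    asl = len(words) / sentences                                  # average sentence length
    asw = sum(count_syllables(w) for w in words) / len(words)     # syllables per word
    flesch = 206.835 - 1.015 * asl - 84.6 * asw
    # The Fog index treats words of three or more syllables as "complex"
    complex_pct = 100 * sum(count_syllables(w) >= 3 for w in words) / len(words)
    fog = 0.4 * (asl + complex_pct)
    return flesch, fog
```

Note that the two indices run in opposite directions: a higher Flesch score indicates easier text, whereas a higher Fog score indicates harder text.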

In Stage 2, a reader-based evaluation checklist was devised to measure the information content of the abstracts. This checklist was based on one originally suggested by Narine et al (1991) and developed by Taddio et al (1994) but it evolved throughout our research, as more and more abstracts were assessed. A copy of the final version is included as Appendix 2. It can be seen that the respondent has to record ‘Yes/No/Not Applicable’ to a series of 22 questions. Each abstract is then awarded a score based on the number of ‘Yes’ decisions recorded.

In this study three judges completed an evaluation sheet for each abstract. One of the judges was the first author of this paper, and he completed sheets for both the traditional and the structured versions of each pair of abstracts. The other two judges for each abstract were drawn from a pool of first- and second-year undergraduate psychology students who assessed either traditional or structured abstracts independently. (These students worked for 30 or 50 minutes each as part of departmental subject-pool requirements.) Fifteen students contributed to this judgement process. The resulting three evaluation sheets for each abstract were then compared and, if there was a discrepancy in the scoring for any item, the majority view was recorded.
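The reconciliation step just described can be sketched as follows. The function name and data layout here are hypothetical; the paper specifies only the procedure itself (three sheets per abstract, the majority view taken on each of the 22 items, and a score equal to the number of final 'Yes' decisions).

```python
from collections import Counter

def reconcile(sheets):
    """Combine judges' checklist sheets -- each a list of
    'Yes'/'No'/'N/A' answers to the 22 questions -- by taking the
    majority view on each item, then score the abstract as the
    number of final 'Yes' decisions."""
    final = []
    for answers in zip(*sheets):   # one question at a time, across judges
        # most_common(1) picks the modal answer; with three judges a
        # three-way split would fall back to the first answer seen
        final.append(Counter(answers).most_common(1)[0][0])
    return final, sum(a == "Yes" for a in final)
```

With three judges and a binary-plus-N/A response set, a clear majority exists whenever at least two judges agree, which is why a simple modal vote suffices.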

Three other measures were also made in this study. Firstly, the students were timed unobtrusively whilst they completed their evaluation tasks. Secondly, a comparison was made between the number of discrepancies in judgements for the traditional and the structured abstracts. And thirdly, comments made by the authors about structured abstracts were tabulated. (These comments were taken from the replies to the original letter requesting copies of the abstracts.)


The main results of this enquiry are shown in Table 2. It can be seen, for all the measures listed in this table, that the structured abstracts were significantly different from the traditional ones. Thus they were significantly easier to read (as measured by these formulae), they were significantly longer, and they contained significantly more information. Appendix 1 provides the raw data upon which these summary statistics are based in order to give a richer picture of the findings.


Table 2. The main results from the study.


                              Traditional    Structured       t            p
                                format         format                (one-tailed)

Flesch Reading       mean        21.4           24.6         2.81      p<0.005
Ease score*          s.d.        11.0           10.0

Gunning Fog          mean        20.3           18.8         3.77      p<0.0005
Index*               s.d.         3.3            2.5

Length               mean       147.4          210.6         8.54      p<0.0005
(in words)           s.d.        47.5           50.5

Evaluation           mean         6.4            9.1         6.04      p<0.0005
score                s.d.         2.8            2.6


*Note that a higher score on the Flesch scale indicates more readable text, whereas the reverse is true for the Fog index.


The data obtained by timing the students during the evaluation tasks were necessarily crude. The students doing the tasks varied from abstract to abstract, and some did more abstracts than others. Nonetheless, eight students worked on 60 traditional abstracts and took a total of approximately 245 minutes (i.e. about 4 minutes for each one), and five students worked on 43 structured abstracts and took a total of approximately 135 minutes (i.e. about 3 minutes for each one). These data suggest that it was easier to evaluate the structured abstracts than the traditional ones.

An examination of the number of discrepancies between the judges using the evaluation checklist allowed us to check whether or not there might be more discrepancies with the traditional abstracts - assuming that they were harder to read and to evaluate - than with the structured ones. One problem here, of course, was that the evaluation sheets had evolved - in the light of the early discrepancies - as the study progressed, and some items were thus worded more precisely in the later versions. Be that as it may, the mean number of discrepancies found between the three judges each responding to the 22 questions per abstract was 4.8 (s.d. 1.8) for the traditional abstracts and 3.4 (s.d. 1.4) for the structured abstracts (t = 3.45, df = 29, p < 0.005, one-tailed test). These data, too, support the notion that the structured abstracts were easier to judge.
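For reference, the paired-samples t statistic used in comparisons of this kind can be computed as below. This is the generic textbook formula, not the authors' actual calculation; the p value would then be read from a t table with df = n - 1 degrees of freedom.

```python
import math

def paired_t(xs, ys):
    """Paired-samples t for matched scores, e.g. discrepancy counts for
    the traditional (xs) and structured (ys) versions of the same
    abstracts.  Returns (t, degrees of freedom)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n), n - 1
```

Because each author supplied both versions of the same abstract, the paired test is the appropriate choice here: it removes between-abstract variation from the comparison.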

Finally, the comments tabulated in Table 3 show that the great majority of the authors involved in this study had positive reactions to structured abstracts. Indeed, only one expressed a preference for the traditional format.


Table 3. A selection from the authors’ views about structured abstracts (Additional responses that replicate the ones included have been deleted.)


We tend to follow this format as a matter of course... I certainly feel that the structured approach is clearer and more informative.


I did find it useful to have such a clear structure to comply with when writing the abstract, and I also found it clearer and more readable than unstructured abstracts I have written for other journals.


My overall impression is that structuring the abstract will make it more informative to the reader - my experience of trying to glean a sense of what a paper is about before spending more time reading it in detail is that abstracts vary enormously in how informative they are, and structuring should make the process more efficient.


In terms of the differences between my two abstracts the structured one contained more information and was a more detailed summary of the research. The ordering of the information was better in the structured abstract, as in a traditional abstract you tend to start with participants, then the issues, the results and a comment. In the structured abstract putting the background and aims first helped to lead the reader into the study and assisted with the comprehension of the information. The aims of the study were more clearly defined in the structured abstract. The conclusion in the structured abstract helped to bring closure to the research, where I tended to leave this out in the old abstract. Visually it is easier to locate the information on the page in the structured abstract.


I found the structured abstract a little difficult to write. I seem to remember the main difficulty was with describing two different experiments within the headings. I ended up by grouping the headings together; this seemed to allow me to describe one experiment after the other more easily. I think that really the difficulty came with trying to re-organise bits of the original -- I’m sure that writing a structured abstract ‘from scratch’ would have been easier than revising an already existing one!


I find the structured abstract very useful for readers, indeed clearer and more informative, although it may seem more rigid for the writer in that some illustrative aspects may not fit properly with the headings. Nevertheless, I think it a good idea.


I found the structured abstract clearer and more informative. And it definitely gave a better sense of what was in the paper. It took longer to think about it (maybe this is a consequence of it being my first one). Staying within the word limit was difficult because the study involved so many different issues and it was difficult to summarise them all within 150 words.


When I was first asked to do a structured abstract I was puzzled by the request. As far as I was concerned my abstract was structured. Eventually, after a second request, I got the message and set about writing a structured abstract in accordance with the instructions I had previously been sent. Much to my surprise, my previous abstract did not contain all the information which was requested. Regardless of whether journals request a structured abstract or not I intend to lay out all of my abstracts in the manner suggested by you and used by the British Journal of Health Psychology.


My article involved constructing a mathematical model rather than reporting empirical results, so one might think that it was the least suitable type of article for an abstract structured under predetermined headings. ‘Purpose’ (I wasn’t given ‘Background’ and ‘Aims’) and ‘conclusions’ are perhaps universally applicable headings, no matter what type of article one is writing, but what are the ‘Methods’ and ‘Results’ of research that consists of mathematical modelling, for example? In fact I had no problem in finding a sensible solution. I interpreted the Method to be the method of solving the problem identified as the Purpose, and the Results to be the inferences that flowed from the model after it had been constructed. Perhaps it would be helpful to authors if some guidelines were drawn up with some permissive and liberal suggestions for interpreting the headings in non-standard cases like mine.


The structured abstract contains more information, but the information is also more sequentialized. It might also guide researchers’ reading habits in the wrong direction - away from the original paper and towards only reading the abstract. I must confess that I like the ‘old’ form best - it is more ‘tight’, and easier to remember as a cognitive unit. Structured abstracts must be remembered in five parts - almost the limits of short-term memory!


I definitely think that structured abstracts are for the better, and more informative than traditional abstracts. Structured abstracts will, I think, better facilitate peer review and be better understood.


Writing structured abstracts is more scientific than writing traditional ones. If authors include p values in their abstracts this will allow future researchers to carry out meta-analyses of the findings.




The main results of this enquiry are in accord with the previous results outlined in the Introduction. In this discussion, however, we want to raise a number of questions about each finding.


1. Changes in readability.

First of all we need to note that we have been looking at structured abstracts that were written after the traditional ones had been prepared. So we have not been examining structured abstracts ab initio. This means that the authors may have taken the opportunity to improve their abstracts when they were asked to reformulate them. Indeed, the findings of improved readability scores have only been reported when the same authors have produced both the traditional and structured abstracts (Hartley and Sydes, 1997). When traditional abstracts in earlier journals have been compared with later structured ones in the same journals, then no differences have (yet) been found in terms of readability (Hartley and Sydes, 1997). Nonetheless many readers think that structured abstracts are more readable (Hartley and Sydes, 1997) and indeed some of the comments of the authors listed in Table 3 support this idea.

Many readers, however, might question the value of using readability formulae to assess the readability of text. Indeed, there are several reasons for holding this view, particularly as these formulae were not devised for measuring such complex text. (See Hartley and Sydes, 1997 for further discussion.) Nonetheless, such measures do supply objective procedures that can be applied consistently to different texts - provided that the same computer package is used on all of them. (Different computer packages that provide ostensibly the same measure - e.g., the Flesch index - actually produce different results, and these differences are more marked with more complex text such as journal abstracts - Sydes and Hartley, 1997.) Thus by using the same ‘before’ and ‘after’ readability measures in this study we have some grounds for claiming that the texts of the structured abstracts were - on average - more readable.

However, as shown in Appendix 1, there were wide variations. The Flesch scores, for instance, ranged from 0 to 44. (It is perhaps appropriate to note here that Flesch scores normally range from 0 to 100 and are subdivided as follows: 0-29 college graduate level; 30-49 13th-16th grade (i.e. 18 years and over); 50-59 10th-12th grade (i.e. 15-17 years), etc.) The mean Flesch score of 21 for the traditional abstracts and of 25 for the structured ones hardly compares well with the score of 25 recorded in an earlier study for 14 traditional abstracts in the BJEP (Hartley, 1994) - although it is in accord with a mean score of 19 recorded by Tenopir and Jacso (1993) for over 300 abstracts published in the American Psychological Association’s PsychLit. Our findings thus still indicate that journal abstracts are difficult to read, especially for students.

Finally, in this section, we may note that the overall improvements in readability were not found with the abstracts written for the BJCP (see Appendix 1). It may be then that the results will vary for different journals but, because so few data were recorded for the BJCP, this idea remains to be put to the test.


2. Changes in the accessibility of information

Earlier studies have suggested that structured abstracts are easier to search than traditional ones, both in printed and electronic versions (Hartley, Sydes and Blurton, 1996). This improvement probably results from the consistent format, and from the sub-headings which help direct the reader’s search. In this present study the structured abstracts did appear to be searched more quickly than the traditional ones, and there were fewer discrepancies in the evaluations of the structured abstracts. These data thus support the notion that the information in structured abstracts is more accessible to readers than it is in the traditional format.


3. Changes in the length and in the information content.

In line with earlier research findings, this study too found that the structured abstracts were typically longer than the traditional ones. In this study the structured abstracts were on average 33% longer than the traditional ones. (This compares with the figure of 50% reported in a Dutch study of medical journals cited by Haynes, 1993.)

With the extra text, of course, comes extra information. Inspection of the evaluation data shown in Appendix 1, for instance, shows that almost every abstract gained from being recast into a structured form in this respect.

Nonetheless, the gains were not always substantial. Part of the reason for this could be that, although more detail might be supplied in the structured abstract, relevant information might also have been included in the traditional one. Thus an abstract might gain a ‘Yes’ score in response to the question, ‘Is there an indication of how the study was carried out?’ if this information was brief (say in the traditional abstract) or more detailed (say in the structured one).

Analyses of the gains from the traditional to the structured formats showed that most increases in information lay under the subheading of ‘Participants’. Here 40% of the structured abstracts had higher scores than did the traditional ones. Next highest were gains for ‘Background’ (37%), ‘Aims’ (33%) and ‘Conclusions’ (33%). Details of ‘Method’ did improve (15%), but the ‘Results’ were little different from each other (7% gain). No abstract achieved all the 22 ‘Yes’ scores possible. (Of course, not all of these categories are appropriate to every abstract, and writers are limited by word length restrictions.) Taddio et al (1994) concluded that structured abstracts were more informative than traditional ones in medical journals but they too pointed out that certain items of information on their checklist were still sometimes omitted.

However, the fact that the typographic settings of structured abstracts are more spacious, together with the fact that these abstracts are typically longer has implications for journal editors and printers. If journal space is at a premium, and the journal circulation is large, then it might be expensive to introduce structured abstracts into printed journals. (This problem does not arise with electronic ones.) However, all four of the journals that we examined in this study allowed themselves the luxury of starting new articles on a fresh right-hand page, and thus there was considerable room for manoeuvre here. So the extra space required for structured abstracts in these journals was not a serious problem.

Concluding remarks

Abstracts in journal articles are an intriguing phenomenon. They encapsulate, in a brief text, the essence of the article that follows. And, according to the APA Publication Manual (1994), ‘a well-prepared abstract can be the most important paragraph in your article’ (p.8). As such, journal abstracts are difficult to write, and they are difficult to read - especially if they are printed in a smaller type-size than the article that follows (Hartley, 1994).

Indeed, the nature of abstracts has changed over the years as more and more research articles have come to compete for their readers’ attention. Today readers have to skim and search far more than they had to do in the past, and the abstract is continually evolving as a gateway into the research literature. Huckin (1993) and Berkenkotter and Huckin (1995) have described how the physical format of journal papers has changed in order to facilitate searching and reading, and Table 4 is reprinted from Huckin’s discussion of abstracts in this context. This table shows how abstracts in scientific journal articles have been getting both longer and more informative. The move to structured abstracts undoubtedly continues this trend.


Table 4. Changes in the frequency, length and informativeness of abstracts published in a selection of science journals over time. (Data adapted with permission from Huckin, 1993.)


Year    No. of      Number of     Average number    Average number of
        journals    abstracts     of sentences      results statements

1944        5           11             4.4                 2.6

1959        7           36*            5.6                 4.7

1974       11          110             6.0                 4.7

1989       11          120             6.5                 5.0


*Includes one abstract with 39 sentences and 38 statements of results.


What structured abstracts appear to do is to force writers to be more explicit about the content of their abstract, and to do this in a systematic way. Recent studies in linguistics have shown that - in general - authors have much the same goals in writing abstracts in different subject disciplines (e.g., see Dos Santos, 1996; Liddy, 1991). Dos Santos (1996), for example, writes: ‘A move analysis reveals that abstracts (in applied linguistics) follow a five-move pattern, namely: Move 1 motivates the reader to examine the research by setting the general field or topic and stating the shortcomings of the previous study; Move 2 introduces the research by either making a descriptive statement of the article’s main focus or presenting its purpose; Move 3 describes the study design; Move 4 states the major findings; and Move 5 advances the significance of the research by either drawing conclusions or offering recommendations.’ Thus it would appear that much of the material from traditional abstracts will find its way into structured ones, but that with structured abstracts it is difficult to leave any of these ‘moves’ out or to vary their order. The resulting consistent format makes these abstracts easier to search in printed and electronic databases (Hartley et al, 1996).

In this paper we have evaluated the results of the decision to use structured abstracts made by four journals published by the British Psychological Society. By comparing the traditional abstracts written by the authors of articles in these journals with the structured ones that they were then required to submit, we have found that their structured abstracts were more readable, more accessible, longer, easier to search and more informative. Our findings are thus in accord with the previous research. Nonetheless, the results are not without their limitations. The sample of abstracts analysed was heavily weighted in favour of the BJEP, and there were discrepancies between the judges in using the evaluation sheets. Furthermore, the overall readability scores of the structured abstracts were still low.

Thus, although it appears that structured abstracts are to be preferred to traditional ones, there is still room for improvement. This might be achieved by editors providing authors with checklists and examples of good practice (Hartley, 1993). Meanwhile, we recommend that the editors of other journals published by the British Psychological Society - and indeed the editors of other journals in the social sciences - take up the option of publishing structured abstracts.


We are indebted to the staff at the BPS journals office for assistance with this enquiry, to the editors of the journals involved and, of course, to the authors who gave us permission to analyse their abstracts. 


American Psychological Association (1994). Publication Manual of the American Psychological Association, (4th edition). Washington, DC: American Psychological Association.

Berkenkotter, C. & Huckin, T. N. (1995). Genre Knowledge in Disciplinary Communication. Mahwah, NJ: Erlbaum.

Dos Santos, M. B. (1996). The textual organisation of research paper abstracts in applied linguistics. Text, 16, (4), 481-499.

Froom, P. & Froom, J. (1993). Deficiencies in structured medical abstracts. Journal of Clinical Epidemiology, 46, (7), 591-594.

Grammatik 5 (1992). Reference Software International: 25 Bourne Court, Southen Road, Woodford Green, Essex, IG8 8HD.

Hartley, J. (1993). Improving the readability of scientific articles. British Journal of Educational Technology, 24, (3), 215-216.

Hartley, J. (1994). Three ways to improve the clarity of journal abstracts. British Journal of Educational Psychology, 64, (2), 331-343.

Hartley, J. & Sydes, M. (1995). Structured abstracts in the social sciences: presentation, readability and recall. R&D Report No 6211. Boston Spa: British Library.

Hartley, J. & Sydes, M. (1996). Which layout do you prefer? An analysis of readers' preferences for different typographic layouts of structured abstracts. Journal of Information Science, 22, (1), 27-37.

Hartley, J. & Sydes, M. (1997). Are structured abstracts easier to read than traditional ones? Journal of Research in Reading, 20, (2), 122-136.

Hartley, J., Sydes, M. & Blurton, A. (1996). Obtaining information accurately and quickly: are structured abstracts more efficient? Journal of Information Science, 22, (5), 349-356.

Haynes, R. B. (1993). More informative abstracts: current status and evaluation. Journal of Clinical Epidemiology, 46, (7), 595-597.

Haynes, R. B., Mulrow, C. D., Huth, E. J. et al. (1990). More informative abstracts revisited. Annals of Internal Medicine, 113, (1), 69-76.

Huckin, T. N. (1993). Surprise value in scientific discourse. Paper presented to the Ninth European Symposium on Language for Special Purposes, Bergen, Norway, August, 1993.

Liddy, E. D. (1991). The discourse-level structure of empirical abstracts: an exploratory study. Information Processing and Management, 27, (1), 55-81.

McIntosh, N. (1995) Structured abstracts and information transfer. R& D Report No 6142. Boston Spa: British Library.

Narine, L., Yee, D. S., Einarson, T. R. et al (1991). Quality of abstracts of original research articles in CMAJ in 1989. Canadian Medical Association Journal, 144, (4), 449-453.

Sydes, M. & Hartley, J. (1997). A thorn in the Flesch: observations on the unreliability of computer-based readability formulae. British Journal of Educational Technology, 28, (2), 143-145.

Taddio, A., Pain, T., Fassos, F. F. et al. (1994). Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. Canadian Medical Association Journal, 150, (2), 1611-1615.

Tenopir, C. & Jacso, P. (1993). Quality of abstracts. Online, 17, (3), 44-55.

Trakas, K., Addis, A., Kruk, D., Buczek, Y., Iskedjian, M. & Einarson, T. R. (1997). Quality assessment of pharmacoeconomic abstracts of original research articles in selected journals. Annals of Pharmacotherapy, 31, (4), 423-438.

Appendix 1

The results for each abstract pair for the four journals.

(T = traditional format, S = structured format.)


Abstracts 01-02: Legal and Criminological Psychology

Abstracts 03-06: British Journal of Clinical Psychology

Abstracts 07-11: British Journal of Health Psychology

Abstracts 12-30: British Journal of Educational Psychology


Abstract     Flesch R.E.     Fog index     Length (words)   Evaluation score
number        T     S         T     S        T     S           T     S

01*           8    12        22    22       64   102           2     3
02           27    25        18    19      119   237           5     9

03            0    14        29    22      169   243           6     7
04           24    26        22    21      242   293           8    11
05*          29    36        18    16       47   149           0     5
06            0     8        29    24       55   119           3     6

07**         15    15        21    21      221   221          14    14
08           29    29        17    16      120   159           9    11
09*          26    20        22    21       97   136           5     6
10+          28    27        19    18      178   210           4     7
11+          26    24        18    17      155   209           6     7

12           11    14        24    22      215   323          10    10
13           29    33        19    19      150   210           6     7
14           27    31        20    19      166   163           9     9
15           26    14        18    20      139   296           9    13
16           13    24        21    19      132   181           5    10
17           18    34        21    15      148   227           5    10
18+          20    21        22    22      133   193           6     8
19           32    36        18    17      152   155           8     8
20           35    43        17    14      191   251           7     9
21           42    42        16    16      125   251           4     7
22           28    25        20    19      131   219           5     9
23++          0     4        25    22      225   196          10     9
24            8    25        24    18      141   222           7     9
25            8     8        20    21      208   241           6     8
26           32    37        17    15      112   198          11    11
27           15    21        19    19      108   201           6    13
28           25    29        17    16      170   263           6    15
29           25    31        18    18      131   205           4    11
30+          35    30        18    17      177   245           7    10

Mean         21    25        20    19      147   210         6.4   9.1
S.d.         11    10       3.3   2.5       48    50         2.8   2.6


* literature review/non-experimental paper (e.g. analysis of statistical data)

** these two abstracts were identical: the author inserted headings into his original text.

+ two studies reported

++ the only case where the structured abstract scored less on the evaluation score than the traditional one. A comment on the need for further research in the traditional abstract was not included in the structured version.
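The Flesch Reading Ease and Gunning Fog scores in the table above are both functions of sentence length and word length. As a rough illustration of what these formulae measure — a minimal sketch using the standard published formulae, with a naive vowel-group syllable counter, so its output will not match the scores reported in the table — the calculation can be expressed as:

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (a crude heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # discount a typical silent final 'e'
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Gunning Fog index) for a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    flesch = 206.835 - 1.015 * wps - 84.6 * spw
    fog = 0.4 * (wps + 100 * complex_words / len(words))
    return flesch, fog
```

Higher Flesch scores and lower Fog scores both indicate easier text, which is why the structured abstracts' higher mean Flesch (25 vs 21) and lower mean Fog (19 vs 20) point in the same direction.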


Appendix 2
Abstract evaluation form (final version)


Number/Title of abstract__________________________________



Please write Yes / No / N/A (Not Applicable) to each of the following items:




___ Is there any mention of previous research or research findings on this topic?



___ Is there any indication of what the aims/purpose of this study were?

___ Is a hypothesis (or hypotheses) provided?



___ Is there any indication of in which country this study took place?



___ Is there any more specific information on where the participants came from?

___ Is there any information on the numbers of participants?

___ Is there any information on the sex distribution of the participants?

___ Is there any information on the ages of the participants?

___ Is there any information on the ability of the participants?

___ Is there any information on the socio-economic status of the participants?



___ Is there any information on how the study was carried out?

___ Is there any information on how the participants were allocated to different conditions?

___ Is there any information on what measures were used?

___ Is there any information on how long the study took?



___ Are the main results presented verbally in the abstract?

___ Are actual numbers from the results presented in the abstract?

___ Are the results said to be statistically significant?

___ Are levels of significance reported numerically?



___ Are any conclusions drawn?

___ Are any limitations of the study mentioned?

___ Are any implications drawn?

___ Does the abstract indicate that suggestions for further research are included in the paper?
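Assuming that each abstract's evaluation score in Appendix 1 is simply the number of "Yes" answers recorded on this form (an interpretation of the scoring, not something stated explicitly in the appendix), scoring a completed form can be sketched as:

```python
def evaluation_score(answers: dict) -> int:
    """Count the 'Yes' answers on a completed evaluation form.

    'No' and 'N/A' responses contribute nothing to the score.
    """
    return sum(1 for a in answers.values() if a.strip().lower() == "yes")

# Hypothetical completed form (item labels abbreviated for illustration):
form = {
    "previous research mentioned": "Yes",
    "aims/purpose indicated": "Yes",
    "hypothesis provided": "No",
    "country indicated": "N/A",
    "numbers of participants given": "Yes",
}
```

On this reading, an abstract answering "Yes" to every applicable item would score up to 22, the number of items on the form.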