U-5_Health Info Resource Comparison

Critical Evaluation and Comparison of Two Internet Public Health Information Resources

ABSTRACT

OBJECTIVE: To determine which of two Websites, HealthInsite and eMedicine Consumer Health, is the better Internet public health information resource.

DESIGN: Pilot study of 10 Websites to select 2 finalists; objective comparison of the two final sites and their breast cancer information content, using the Minervation and Net Scoring benchmarking tools, and manual and online readability tests.

DATA SOURCES: Key features from all the Websites.

MAIN OUTCOME MEASURES: Accessibility, Usability, Reliability and Readability of the sites.

RESULTS: All figures are for HealthInsite vs. eMedicine. With the Minervation tool, Accessibility was 88.9% vs. 54%; Usability 83.3% vs. 72.2%; Reliability 85.2% vs. 51.8%; the overall score was 86.1% vs. 60.4%. With Net Scoring the corresponding scores were 50% each (Accessibility); 74.4% vs. 70.6% (Usability); 65.1% vs. 52.7% (Reliability); and 68.7% vs. 60.5% (Overall). Readability scores were 43.1 vs. 47 (FRE) (p=0.99); 11.6 vs. 10.7 (FKGL) (p=0.98); and 9.7 vs. 8.2 (Fog). With the online readability tool, the scores were 61.8 vs. 61.3 (FRE); 8.9 vs. 8.7 (FKGL); and 12.2 vs. 11.8 (Fog).

CONCLUSION: As a patient/public health information resource, HealthInsite was better overall. Both HealthInsite and eMedicine failed to meet UK government requirements. Quality benchmarking tools and readability tests/formulae are not perfect and lack conformity amongst themselves. The task of benchmarking and measuring readability is rigorous and time-consuming. Automating both processes through a comprehensive tool may aid human experts in performing their task more efficiently.

Key words: Quality benchmarking tools; Readability tests

(The following document's FRE=28.6 and FKGL=12)
INTRODUCTION

Ninety-five million Americans use the Internet for health information.1 There were >100,000 medical Websites in 1999, and the number is increasing phenomenally.2 These give rise to some cogent questions begging for urgent answers. How much of the information is useful, genuine or usable to the public? What impact does it have on them?3,4 What benchmarking tools should be used to assess the authenticity/reliability/validity of online information? How can the benchmarking process be improved?

This essay considers these inter-related issues. We have critically compared and contrasted two public/patient Internet health information resources from two regions, from a public/patient's and a specialist's perspective. We selected breast cancer because it is the most common cancer in women, kills 400,000 annually, and can strike early;5,6 it is the biggest cause of cancer deaths in Australian women, and the second biggest cause in the US and Britain;6,7 it is one of the most common health-related search topics among Internet users;8 and finally, the author of this essay manages the Breast Clinic in the Seychelles Ministry of Health.

MATERIALS AND METHODS

Downloading/Installing HONcode Toolbar

The HONcode9,10 Accreditation Search and Verification Toolbar software was downloaded and installed in our browser (IE Version 6.0.2800.1106) Explorer Bar, through a series of HONcode 1.2 Setup wizard dialogue boxes. (Figures-1,2)

Figure-1: HON and HONcode logos
Figure-2: Screenshot of HONcode 1.2 Setup wizard box

We installed the automatic HONcode accreditation status indicator on the Toolbar (View menu → Toolbar option). We did not install the HONcode search box because it was slowing down the opening of our browser.
Right-clicking on some highlighted text and selecting 'HONcode search' indicated the site's accreditation status.11

Piloting 10 sites

Figure-3: C-H-i-Q logo

Next, a pilot study was conducted on ten Websites (selected from Internet in Health and Healthcare12 and from an Internet survey) to finalise two for evaluation/comparison. Patient/public-oriented resources and some professional ones were included (Appendix-Box-A). The Centre for Health Information Quality (C-H-i-Q)13 (Figure-3) checklist was applied to each site. The parameters were scored on a scale from 0 (not present) to 5 (fully present). We determined the Web Impact Factor (WIF)8,14 from the results returned by AltaVista (http://www.altavista.com/; accessed 18 June 2005), by entering 'link:URL - host:URL' in the search box, after selecting the 'search Worldwide' option. Some additional points, including HONcode status, were also included, with a score of 0 (not present) or 1 (present). (Appendix-Box-1, Appendix-Table-1)

HealthInsite / eMedicine Analysis and Comparison

We applied two quality benchmarking tools to the two finalists, HealthInsite (Australian) and eMedicine (American), to compare two resources from different regions. The two benchmarking tools were:

1. A tool from Minervation (LIDA Instrument version 1.2) (Figure-4): This tool assesses a site on three Levels (Accessibility, Usability and Reliability), which are further subdivided into sub-levels and sub-sub-levels (Appendix-Box-2).15

Figure-4: Minervation homepage

For Accessibility (Level-1) we went to www.minervation.com/validation (accessed 12 June 2005) and entered the respective URLs (HealthInsite, eMedicine) in the designated box. The validation tool generated answers to the first 4 questions. For all the remaining questions we viewed the sites normally and entered the appropriate scores. Each question was scored on a scale of 0-3, where 0=Never; 1=Sometimes; 2=Mostly; 3=Always. The supplemental questions in Reliability (Level-3) were not considered since they required contacting the site producers.15

2. Net Scoring, a French quality benchmarking tool, was applied next (Figure-5). This has 49 criteria grouped into 8 categories: Credibility, Content, Hyperlinks, Design, Interactivity, Quantitative aspects, Ethics, and Accessibility. Each criterion is classified as essential (0-9), important (0-6), or minor (0-3) (maximum = 312 points).16 All categories were used for evaluation except Quantitative, and one important criterion under Hyperlinks, which were not applicable to us. Therefore our assessment was on a maximum of 294 (312 minus 18) points.

Figure-5: Central Health/Net Scoring logos

Readability scoring

The breast cancer contents of each site were compared by means of readability indices. For consistency, specific breast cancer topics were selected.8 (Table-3) The readability tests were the Flesch Reading Ease (FRE)17, Flesch-Kincaid Grade Level (FKGL)17, Gunning's Fog Index18, and an online readability tool that automatically generated Kincaid, ARI (Automated Readability Index), Coleman-Liau, Flesch, Fog, Bjornsson's Lix and McLaughlin's SMOG scores19 (Appendix-Boxes-3,4,5). Microsoft® Word has an in-built facility to give the FRE and FKGL scores. The 'Tools' menu in MSWord 2003 was configured as outlined in Appendix-Box-5a, Figure-6.17

Figure-6: Screenshot of Tools menu Options dialogue box

MSc Healthcare Informatics RCSEd+Univ of Bath; Unit 5-Remote Healthcare; 2005. Tutor: MNK Boulos; Student: Sanjoy Sanyal

FRE and FKGL scores: Text from the documents was copied to the clipboard and pasted into Microsoft® (Redmond, WA) Word 2003. Each document was meticulously 'processed' as per Pfizer guidelines (viz.
headings/titles, page navigation, bullets and references/URLs removed; hyphenated words/proper nouns included; footnotes excluded); only the main text body was used.17,20 On running the spellchecker in MSWord 2003, after it finished checking, it displayed statistics about the document and the FRE/FKGL scores.17 Mean, Standard Deviation, Variance and the Probability associated with Student's t test were computed in MSExcel 2003.

Fog Index: In the absence of software,21,22 we calculated the Fog Index 'manually', as outlined in Appendix-Box-5.18 We counted all the words with >3 syllables according to Pfizer guidelines (saying the word aloud with a finger under the chin; each chin drop counted as a syllable).20

Online readability tool: For a further readability check, the documents were uploaded onto an automated online readability tool (http://www.readability.info/uploadfile.shtml; accessed 12 June 2005). The instrument converted the documents into plain text and generated the scores.19

RESULTS

HONcode toolbar installation

The HONcode Status and Search icons were installed on the Explorer Bar, and the accreditation status indicator in the Toolbar. The latter automatically displayed the HONcode accreditation status of a Website (Figures-7a,b).

Figure-7a: HONcode Status/Search icons installed, and accreditation status
Figure-7b: Accreditation status

Pilot study results

The top scorers were HealthInsite (54), and NHSDirect and eMedicine (50 each) (Box-2, Appendix-Table-1). Only MedlinePlus, healthfinder®, HealthInsite and eMedicine were HONcode-accredited. NHSDirect and NeLH carried the NHS seal. The MedlinePlus breast cancer page was not HONcode-accredited.

Box-2: Scores of Websites in pilot study
- HealthInsite (Australia) – 54
- NHS Direct Online (UK) – 50
- eMedicine Consumer Health (USA) – 50
- MedlinePlus (USA) – 48
- Healthfinder® (USA) – 42
- NIHSeniorHealth (USA) – 33
- NeLH/NLH (UK) – 32
- DIPEx (USA) – 28
- Cochrane Library – 26
- HealthConnect (Australia) – 2

HealthConnect, under re-development, had no breast cancer search results. The Cochrane Library and NeLH/NLH had insufficient public material. AltaVista search results for NeLH/NLH were 43,300/255 respectively. The DIPEx breast cancer search returned only subjective interview transcripts rather than objective information. NIHSeniorHealth had features typically suited to the elderly. MedlinePlus and healthfinder® were comparable, but breast cancer information was more systematically arranged in the former. The three top scorers in the pilot, NHSDirect, HealthInsite and eMedicine, had almost comparable features. NHSDirect had the highest number of results (231,000) from the AltaVista search. (Figures-8-16)

Figure-8: HealthConnect breast cancer search result
Figure-9: NeLH breast cancer search page, patient information (NHS seal)
Figure-10: NIHSeniorHealth breast cancer search page
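The link-popularity measure used in the pilot can be made concrete. In Ingwersen's formulation, the Web Impact Factor is the number of external pages linking to a site (the 'link:URL -host:URL' count) divided by the number of pages the site itself contains. A minimal sketch, with purely hypothetical counts for illustration (the study recorded raw link counts only):

```python
def web_impact_factor(external_inlinks: int, site_pages: int) -> float:
    """WIF = pages linking to the site from outside it
    divided by the number of pages the site contains."""
    if site_pages == 0:
        raise ValueError("site must have at least one indexed page")
    return external_inlinks / site_pages

# Hypothetical engine counts, not figures from the study:
# 12,000 external inlinks over 480 indexed pages.
print(web_impact_factor(12_000, 480))  # 25.0
```

As the Discussion notes, a high WIF reflects 'peer-review' popularity, not quality; the ratio only normalises link counts for site size.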
Figure-11: DIPEx breast cancer search page
Figures-12,13: MedlinePlus homepage; breast cancer information
Figure-14: NHS Direct Online homepage (NHS seal); highest link popularity
Figure-15: HealthInsite homepage
Figure-16: eMedicine Consumer Health homepage

Benchmarking results

With the Minervation tool, HealthInsite secured 86.1% against eMedicine's 60.4% (Table-1, Appendix-Tables-2,3).

Table-1: Results with Minervation tool
                                                           HealthInsite   eMedicine
Level-1 (Accessibility) (maximum = 63 points)
  First four automated tests (maximum = 57 points)         50             28
  Browser test (maximum = 3 points)                        3              3
  Registration (maximum = 3 points)                        3              3
  Subtotal (% of 63)                                       56 (88.9%)     34 (54%)
Level-2 (Usability) (maximum = 54 points)
  Clarity (6 questions; maximum = 18 points)               15             12
  Consistency (3 questions; maximum = 9 points)            8              9
  Functionality (5 questions; maximum = 15 points)         13             13
  Engagibility (4 questions; maximum = 12 points)          9              5
  Subtotal (% of 54)                                       45 (83.3%)     39 (72.2%)
Level-3 (Reliability) (maximum = 27 points)
  Currency (3 questions; maximum = 9 points)               9              3
  Conflicts of interest (3 questions; maximum = 9 points)  9              6
  Content production (3 questions; maximum = 9 points)     5              5
  Subtotal (% of 27)                                       23 (85.2%)     14 (51.8%)
Grand total (% of 144)                                     124 (86.1%)    87 (60.4%)

With Net Scoring, HealthInsite scored marginally better (68.7%) than eMedicine (60.5%) (Table-2; Appendix-Tables-4,5,6).

Table-2: Results with Net Scoring
                                                           HealthInsite   eMedicine
Content (maximum = 87 points)                              64 (74.7%)     46 (52.9%)
Credibility (maximum = 99 points)                          55 (55.5%)     52 (52.5%)
Hyperlinks (maximum = 45 points) (minus 6 points - see text) 31 (79.5%)   29 (74.4%)
Design (maximum = 21 points)                               15 (71.4%)     16 (76.2%)
Accessibility (maximum = 12 points)                        6 (50%)        6 (50%)
Interactivity (maximum = 18 points)                        13 (72.2%)     11 (61.1%)
Ethics (maximum = 18 points)                               18 (100%)      18 (100%)
Grand total (% of 294)                                     202 (68.7%)    178 (60.5%)

Figure-17: 2x2 comparison - total scores of the two sites by the two benchmarking tools

Readability results

The results of MSWord (FRE, FKGL) and the manual technique (Fog) are summarized in Boxes-3,4; Table-3; Figures-18,19; and embedded MSExcel 2003 worksheets-1,2.
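The manual Fog calculations reported in Boxes-3 and 4 below can be reproduced programmatically. A minimal sketch of Gunning's formula, Fog = 0.4 × (words/sentences + 100 × complex-words/words), using the word, sentence and >3-syllable counts from the boxes; last-digit differences arise because the boxes round the intermediate ratios:

```python
def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    """Gunning Fog Index: 0.4 * (average sentence length
    + percentage of 'complex' (>3-syllable) words)."""
    avg_sentence_len = words / sentences
    pct_complex = 100 * complex_words / words
    return 0.4 * (avg_sentence_len + pct_complex)

# Counts taken from Boxes-3 and 4 of this study
fog_healthinsite = gunning_fog(5198, 237, 126)
fog_emedicine = gunning_fog(3319, 196, 124)
print(round(fog_healthinsite, 2))  # 9.74
print(round(fog_emedicine, 2))     # 8.27 (Box-4 gives 8.26 from rounded intermediates)
```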
Box-3: HealthInsite Fog Index
Words/Sentences = 5198 / 237 = 21.93
[Words >3 syllables / Words] x 100 = [126 / 5198] x 100 = 2.42
Fog = [21.93 + 2.42] x 0.4 = 9.74

Box-4: eMedicine Fog Index
Words/Sentences = 3319 / 196 = 16.93
[Words >3 syllables / Words] x 100 = [124 / 3319] x 100 = 3.73
Fog = [16.93 + 3.73] x 0.4 = 8.26

Table-3: Readability statistics
                                                   HealthInsite          eMedicine
Text                                               FRE    FKGL   Fog     FRE    FKGL   Fog
Breast cancer overview / facts and figures         55     9.9            56.2   8.8
Breast cancer causes / risk factors                55.6   8.9            48.2   11
Tests for breast cancer / mammography              49.7   10.4           51.9   9.7
Treatment options for breast cancer                38.1   12             41.1   11.7
Support / follow-up for women with breast cancer   40.8   11.7           41.5   11.6
Combined text                                      43.1   11.6   9.74    47.0   10.7   8.26
Mean (of the 5 topics)                             47.84  10.58          47.78  10.56
Std Deviation (SD)                                 8.05   1.28           6.56   1.26
Variance                                           51.87  1.32           34.42  1.28

Figure-18a: HealthInsite Readability Indices screenshot
Figure-18b: eMedicine Readability Indices screenshot

Sheet-1: Comparison of FRE scores (embedded MSExcel worksheet). Probability associated with Student's t test (2-tailed distribution, unpaired 2-sample with unequal variance) = 0.99. No statistical difference.
Sheet-2: Comparison of FKGL scores (embedded MSExcel worksheet). Probability associated with Student's t test (2-tailed distribution, unpaired 2-sample with unequal variance) = 0.98. No statistical difference.

The results from the online readability tool are summarized in Table-4 and Figure-19.

Table-4: Online readability results
Test/Formula     HealthInsite          eMedicine
Kincaid          8.9                   8.7
ARI              10.3                  9.7
Coleman-Liau     13.1                  12.9
Flesch Index     61.8                  61.3
Fog Index        12.2                  11.8
Lix              41.1 (School year 7)  40.3 (School year 6)
SMOG-Grading     11.3                  11.1

Figure-19: Readability of the two sites through the automated tool

The mean readability values (MSWord-derived) for HealthInsite and eMedicine were similar (FRE 47.84 vs. 47.78 (p=0.99); FKGL 10.58 vs. 10.56 (p=0.98), respectively). The automated test generated higher FRE and lower FKGL scores than MSWord. Conversely, it returned a higher Fog index than the manual method. The mean scores and the scores for the combined text (MSWord-derived) were similar for eMedicine but not so for HealthInsite. We found a high negative correlation (-0.96) between FRE and FKGL, measured with the MSExcel CORREL function (Worksheet-3, Chart-1).23

Sheet-3 and Chart-1: FRE-FKGL correlation (embedded MSExcel worksheet). Correlation = -0.965; an almost perfect negative correlation.

DISCUSSION OF METHODS AND RESULTS

HONcode represents a pledge to abide by 8 ethical principles.10 It is not a quality benchmarking instrument/seal. Unethical developers may cut-and-paste it onto their sites.13 Moreover, sites displaying a HONcode seal may not comply with the code.8 They may violate the HONcode after the accreditation has been awarded by HON, and before their next infrequent check. Though we installed the toolbar plugin,11 these caveats should be kept in mind; the HON toolbar per se does not detect violations in a HON-accredited site.

We utilized the Web Impact Factor (link/'peer-review' popularity) in our pilot study.
This is a better indicator of popularity than click popularity (frequency of site visitation), which may be manipulated.8,14 By AltaVista search, eMedicine had a lower WIF than healthfinder® and NHSDirect, and HealthInsite had an even lower one (Appendix-Table-1). Thus, the popularity of a site does not necessarily correlate with quality.8,14

HealthInsite / eMedicine - Critical/Analytical Review/Comparison (Figures-15,16,20a,b,21,22)

Benchmarking tools: Score ranges like 0-3, 0-6 etc. are pseudo-objective, giving a false sense of mathematical precision. Moreover, a low score on one important criterion may be compensated by high scores on two unimportant criteria, giving the same overall score. There is a lack of conformity between different tools.15,16 There is also the problem of inter-rater reliability (kappa value).24,25 Finally, there are rater-training and rating-the-rater issues to be considered.14 But in the absence of other means of site assessment, scoring systems represent the only available fallback. They force our attention towards the important points about a health information site. They are considered acceptable if at least some issues (like inter-rater agreement kappa value >0.6) are dealt with at the outset.24

Accessibility: This determines whether the Website meets the W3C-WAI and Bobby standards, whether it is 'future-proof' and whether users can access the information.15,26,27 Automated tools/measurements of accessibility are not perfect. They should be used with caution, and their results interpreted with discretion.28 eMedicine failed the automated tests (Page setup, Access restrictions, Outdated code, Dublin core tags). With 87% overall, HealthInsite still did not meet the UK government legal standards.15 By Net Scoring, both sites scored 50% in Accessibility, but we cannot rely on this figure because Net Scoring attached rather low importance to this category.

Usability: This determines whether users can find the required information; the ease of use of a service/component. Good usability increases usage and 'stickability'. Low usability results in a decreased perception of usefulness.12,15,29 With the Minervation tool, HealthInsite scored somewhat better than eMedicine. The latter lost out on clarity and site interactivity. Usability under the Minervation tool corresponds to the Hyperlinks-Design-Interactivity combination under Net Scoring. There was no significant difference between the two sites with this tool [Table-2, Appendix-Table-6]. HealthInsite relied entirely on external sites (>70) to provide information, reachable through a series of mouse-clicks. This rendered usability assessment somewhat difficult. This was not so with eMedicine, which provided its own material. Both had good print quality, though HealthInsite's multiple partners resulted in variable fonts and sizes. The somewhat cluttered and confusing appearance of the eMedicine homepage (Figure-16b) rendered it inferior in site design. Neither site provided author contact details, and only HealthInsite enabled consumer participation.

Reliability: This determines whether the site provides relevant and unbiased, or unreliable and potentially harmful, information. In a systematic review of Web health information quality, problems were found in 70%.15 Under the Minervation tool, eMedicine failed, mainly through failure to specify currency on all pages and conflicts of interest. Despite providing its content through external resources, most of HealthInsite's material was well categorized and sensibly linked together as a coherent whole.30 This category roughly corresponds with the Content-Credibility components of Net Scoring, which attaches a lot of importance to them. With Net Scoring, the composite score difference between the two sites was less [Table-2, Appendix-Tables-4,5].
Both Websites performed poorly in noting omissions, displaying information categories, the name/title of the author, the source of financing, conflicts of interest, and the webmastering process. Both sites had good editorial review and language quality. Only HealthInsite provided an alternative-language facility. (Figure-20a)

Figure-20a: HealthInsite - Language options

A site can only check the quality of its immediately linked pages. It is virtually impossible to verify all subsequent pages that the partner sites link to; therefore this cannot be considered a quality requirement. HealthInsite had provided a disclaimer to this effect. HealthInsite did not specify influence/bias and had no metadata, while eMedicine did not mention the hierarchy of evidence or the original source, and had no help page and no scientific review. But its site carried the ICRA label v02.31 Net Scoring considers an evolving technology like metadata an essential criterion.13 Net Scoring, but not the Minervation tool, had an ethics category. Ethics was implicit in HealthInsite and explicit in eMedicine through a 'Code of ethics' link. (Figure-22)

Figure-20b: HealthInsite - Re-direction to partner sites (user freedom)
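The observation that a portal can realistically verify only its immediately linked pages is straightforward to operationalise. A minimal sketch (hypothetical URLs; real checks would additionally need politeness delays and error logging) that extracts the first-level links from a page and tests each with an HTTP HEAD request:

```python
from html.parser import HTMLParser
import urllib.request

class LinkExtractor(HTMLParser):
    """Collects href targets of <a> tags - the 'immediately linked' pages."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def first_level_links(html: str) -> list:
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def is_reachable(url: str, timeout: float = 5.0) -> bool:
    """HEAD-request a single link; a broken link raises or returns >= 400."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False

# Hypothetical page fragment for illustration
page = '<p><a href="http://example.org/a">A</a> <a href="http://example.org/b">B</a></p>'
print(first_level_links(page))  # ['http://example.org/a', 'http://example.org/b']
```

Running such a check periodically addresses the 'link persistence; no broken links' recommendation, while leaving second-level (partner-site) links out of scope, as the text argues.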
Figure-21: HealthInsite salient points
Figure-22: eMedicine salient points (advertisement)

Privacy policies: Both sites had similar policies with regard to the type of information collected, how it was used, under what circumstances and to whom it could be disclosed, and the use of clickstream data/cookies. HealthInsite adhered to the Australian Guidelines for Federal and ACT Government World Wide Websites, and complied with the Information Privacy Principles (Glossary 1-3,10,11; Privacy Act). It explained E-mail privacy, site security and user anonymity. A contact officer's E-mail address was provided for privacy-related queries.32 eMedicine gathered information to understand user demographics. It obtained additional information about site traffic/use via Webtrends™ software. It occasionally shared some information in aggregate form (Figures-23a,b).33 With a P3P-enabled browser, in-built in MS-IE6, it could be possible to view sites' privacy policies and match them with the user's preferences.12,14

Figure-23a: HealthInsite Privacy Statement
Figure-23b: eMedicine Privacy

Advertising: HealthInsite did not accept advertisements,34 but eMedicine did. Advertisement-containing pages take longer to load; the chances of ad-ware/pop-ups/virus attacks increase; pages may be confusing to the uninitiated user; annoying ads may distract the reader and affect site usability; there may be content bias/consumerism (more commercial than factually oriented);35 advertisers may not strictly adhere to ethical principles; and the privacy/cookie policies of advertisers may be at variance with those of the main site.33 However, discreetly placed ads may be a good thing, and sponsored resources may have more user-customized information. The impact of Web advertising needs further research.35

Regional cultural/linguistic differences: The USA has a substantial Hispanic population, yet eMedicine did not have an 'Espanol' option. It has been claimed that American articles are more difficult to read than British-affiliated ones;22 others have challenged this.36 Our findings did not corroborate the original claim. If the Web is to be truly "accessible to everyone" (Berners-Lee)26, and if we are to sincerely try to reduce racial/ethnic Internet access disparities (a la the Digital Divide), then apart from alternate language options, readability levels must be appropriate for socio-ethnic minorities.37

Readability: There was no significant difference between the two sites. Readability can be tested by using test subjects, readability experts or readability formulae.37 We selected the last approach because of expediency. The beginning, middle and end portions of the text must be selected for testing.20,37 The reliability of the results depends on proper 'cleaning' of the documents. Variable results from the same document, and discrepancies between the results from different tools, arise from improper sampling/cleaning of the documents.17,20 MSWord and the online tool gave opposing results.
This also emphasizes the variability between different tools/formulae.19 Readability formulae measure the structure/composition of text rather than meaning/context; word length rather than the words themselves.18,23 They do not distinguish between written discourse and meaningless sentences.37 Shorter sentences and words with fewer syllables might improve readability scores without improving readability.20 Readability formulae do not measure language familiarity, clarity, new concepts, format/design, cultural sensitivity/relevance, credibility/believability or comprehensibility.18,20 They do not address communication/interactivity, the reader's interest, experience, knowledge or motivation, the time to read, or the unique characteristics of the Internet.37

For some European languages within an English document, MSWord displays statistics but not readability scores.38 Applying FRE to German documents does not deliver good results.19 The problem of testing Spanish documents is applicable to the USA. Apart from establishing a panel of pre-tested experts, we need software like the Lexiles Framework® that measures readability in English and Spanish.37 Given all these constraints, readability scores computed using formulae should be interpreted with caution. But they are quick, easy, and better than nothing at all.23

Recommendations to HealthInsite/eMedicine

The following site-specific recommendations are based on the deficiencies noted in the sites. Appendix-Box-6 gives some generic principles by Nielsen and Constantine.39

HealthInsite
 Accessibility:
  - Implement Table Summaries (let visually-impaired users know what is in a table)15
  - Implement HTTP-Equivalent Content-Type in the header (W3C requirement)12,15
 Usability:
  - Integrate non-textual media (visuals) into the Website23
  - Use a consistent style; avoid excessive fonts and size variations12
 Reliability:
  - Specify influence, bias16

eMedicine
 Accessibility:
  - Implement metadata - Dublin core title tags (compatibility with NHS directives)15
  - Eliminate outdated code - HTML elements that would not be used in future versions, specifically body and colour font tags15
  - Use stylesheets (efficient/consistent design practices)15
  - Implement Image Alt Tags and Table Summaries (let visually-impaired users know what is in an image and a table, respectively)15
  - Implement a DTD (makes the site XML-compatible/future-proof)15
  - Implement HTML Language Definition (Bobby recommendation)15,40
 Usability:
  - Clear statement of who this Website is for16
  - Render the Website user-interactive (user personalisation)12,16
  - Make the page more neat and trim12,16,39
  - Provide a Help page16,39
  - Alternate language options16
  - Advertising links discreetly placed; separate from general content35
  - Reduce page download time12
 Reliability:
  - Content-specific feedback mechanism; provide forums/chat (to submit content-specific comments)16
  - Currency - update content at appropriate intervals; mention last update/review15,16,41,42
  - State hierarchy/levels of evidence (for medical decisions)14,16
  - Specify the scientific review process16
  - Provide information on how to evaluate online health information16

Both
 Usability:
  - Link persistence; verify functioning hyperlinks; no broken links12,16,29,43
 Reliability:
  - Implement MedCIRCLE labeling - the follow-up of MedCERTAIN; implements the HIDDEL vocabulary (the latter is based on MedPICs and is a further development of PICS)14,29,44
  - Mention the source of information (let users verify from the original source)15,16
  - Author's name/title and contact information on each document12,22,35
  - Note any omissions16
  - Clearly display information categories (factual data, abstracts, full-text documents)16
  - Proper/formal 'What's new' page16
  - Specify conflicts of interest (financing source, author independence etc.)12,15,16
  - Mention the Webmastering process16
 Readability:
  - Scale the readability level down to 6th Grade level37,45
  - Have a 'readability seal' along the lines of the HONcode seal (inform readers of the reading ease/difficulty level)23

Lessons Learned from Study

Our study had several limitations. Our methodology and results have not been validated in independent studies. Only two sites and limited content were compared, over a short period.22,35 Due to the dynamic nature of the Web, some of our findings may change over time.35 Our study did not evaluate the advertised resources in eMedicine, which may have had more customized information.35 Accuracy was assessed only by the author; more objective measures for evaluation must be established.8,35

But we learned several lessons. Both our test sites, when run through the gauntlet of a quasi-mathematical objective scoring system, did not meet the UK government legal standards.16 'Popular' resources are not good enough, and/or quality benchmarking tools employ criteria that are too difficult to fulfil. Quality benchmarking of online health information resources is a strenuous task. This is compounded by the fact that rating/scoring systems/tools and readability tools are not perfect, with a considerable lack of conformity between them.15,16,19 Usability is a subjective assessment while reliability/content is more objective.35 Rating tools are more useful for researchers/informaticians than for patients and clinicians.35 People are relying more and more on the Internet for health information.1 Our study may provide a basis for clinicians to guide patients seeking relevant/reliable Web health information.35

Medical knowledge should arguably be treated as a single pool of knowledge with uniform accessibility for professionals and the public. This viewpoint has its supporters and dissenters.23 Yet the Internet is rife with different information sets for professionals and the public (Internet in Health and Healthcare; slides 5-14/15-21).12 Our pilot study highlighted the essential differences between patient/consumer and professional medical/health information resources (NeLH/Cochrane, for example).
Our final studymethodology/results may be generalized to the former but not to the latter types of resources. For these we require different tools;Oxford CEBM,13 ANAES method (level of evidence for therapy),14 DISCERN guidelines (for treatment choices), Medical Matrixstar ranking system, AMA Guidelines, HSWG Criteria,13 CONSORT statement (for randomized trials), QUORUM statement (forsystematic reviews), and CHERRIES statement (for Internet E-surveys).46Public health information resources are supposed to be gateways to public education. This involves providing reliable/accurateinformation, and informing them how to assess the quality of information. HealthInsite had taken cognizance of these points. Thisalso entails keeping in mind the literacy/readability levels of the average population. Average public readability level is usuallyMSc Healthcare Informatics RCSEd+Univ of Bath; Unit 5-Remote Healthcare; 2005. Tutor: MNK Boulos; Student: Sanjoy Sanyal 12
lower than the school grade level completed.37 The estimated reading age of the UK general population is 9 years.23,47 About 47% of the US population demonstrate low literacy levels.48 The OECD considers level 3 the minimum requirement for modern life; a considerable proportion of the population is below that.49 Most of the evaluated documents required lengthy scrolling, carried small fonts, and ranked 'Difficult'/'Fairly difficult' (Figure-24). Similar findings were noted by others.22,23,47,50-52 Such documents should be scaled down to a level appropriate to the target audience, i.e. 'Standard English'/'Fairly easy'.48,50 Improving readability will enhance their public consumption.22,23,47,50 We had to employ different tools to evaluate Websites and readability. Ideally, quality benchmarking checklists should include parameters for testing readability as well.23
Figure-24: FRE vs. Comparable literature (Breese et al, JAMA 2005)
Figure 25 outlines the complex inter-relationships between quality, accuracy, trust and popularity of Websites, elucidated from various studies.8,53,54 But there is no uniformity between quality indicators,15,16 and current quality criteria cannot identify potentially harmful online health information.24 We found HealthInsite better than eMedicine, though both were HONcode-accredited and had comparable accuracy. Further studies are required to establish the true inter-relationships.
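The FRE, FKGL and Fog scores used throughout this study (formulas in Appendix Boxes 3-5) are simple functions of average sentence length, syllables per word, and the proportion of 'difficult' (3+ syllable) words. The sketch below implements those formulas directly; the syllable counter is a naive vowel-group heuristic (a stand-in for the dictionary-based counters real tools use), so its scores will differ slightly from MS Word's or Readability.Info's.

```python
import re

def _syllables(word: str) -> int:
    """Naive syllable estimate: count vowel groups (real tools use dictionaries)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    """Flesch Reading Ease, Flesch-Kincaid Grade Level and Gunning Fog,
    using the formulas given in Appendix Boxes 3-5."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    asl = len(words) / len(sentences)                     # average sentence length
    asw = sum(_syllables(w) for w in words) / len(words)  # avg syllables per word
    hard = sum(1 for w in words if _syllables(w) >= 3)    # 'difficult' words
    return {
        "FRE": 206.835 - 1.015 * asl - 84.6 * asw,
        "FKGL": 0.39 * asl + 11.8 * asw - 15.59,
        "Fog": 0.4 * (asl + 100 * hard / len(words)),
    }

scores = readability("The cat sat on the mat. It was happy.")
```

On such trivially simple text the FRE can exceed 100 and the FKGL can go below zero, which is one illustration of why these formulae should be cross-checked against each other rather than read as absolute grades.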
Figure-25 (diagram elements): HONcode logo; Organisation domain; Copyright display; Accuracy; Quality; Author/medical credentials ≠ Accuracy; Lack of currency ≠ Inaccuracy; Presence of advertisements ≠ Inaccuracy; Quality ≠ Popularity; Type of Website → Popularity; Trust. Ref: Meric F et al 2002; Fallis et al 2002; Lampe et al 2003
Figure-25: Complex inter-relationships (quality/accuracy/trust/popularity) – far from perfect

Making Evaluation/Comparison Better
Generalisability: Ideally we should evaluate ~200 Websites,8 cover a variety of subjects (diabetes, hypertension, asthma, Alzheimer's, lung/colon cancer, etc.), and include more topics under each disease. To generalize our findings we need broader studies.22,35
Accuracy assessment: In our study the author's personal knowledge of breast cancer was utilised to assess accuracy, but such knowledge may not be available for all topics/illnesses. We need to use a panel of experts for each topic8 and/or develop an instrument from authoritative sources (viz. Harrison Online) for each topic to assess the accuracy of Web content.53 Likewise, readability tests should ideally be supplemented by feedback from a panel of readability experts.23
Objectively measuring Web content: 'Concise', 'scannable' and 'objective' Web content improves measured usability by 58%, 47% and 27% respectively; all three attributes together improve it by 124%.55 We should objectively measure Web content quality using Nielsen's five usability metrics (Appendix-Box-7).55
Refining readability scoring: There are many readability tests/formulae (Appendix-Box-8).18-21,37 Ideally, a combination of several tests37 that incorporates the best parameters from all of them should be utilized. Using different tests/formulae serves to cross-check the readability scores of the text pieces under study, and also to validate one tool against another.
We have tried to achieve both of these to a limited extent in our study.
Comprehensive quality benchmarking system: A comprehensive quality benchmarking tool could be developed by pooling the best criteria from all systems currently available. Even better would be an intelligent software wizard that automatically qualifies a Website according to pre-programmed criteria.13 It must be emphasized that tools/wizards can never fully replace humans in quality benchmarking tasks; they can only help them work more efficiently and ensure they follow the required evaluation protocol.13
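As a toy illustration of the kind of pre-programmed check such a wizard might automate, the sketch below scans a page's HTML for two of the machine-testable criteria seen in the Minervation automated tests (a page title and image alt tags). The criteria chosen and the 0-3 point scale are illustrative only, not the actual LIDA/Minervation rules.

```python
from html.parser import HTMLParser

class QualityWizard(HTMLParser):
    """Toy automated checker: scores a page on two pre-programmed
    accessibility criteria (illustrative, not the real LIDA rules)."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.has_title = False
        self.imgs = 0
        self.imgs_with_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.has_title = True
        elif tag == "img":
            self.imgs += 1
            # attrs is a list of (name, value) pairs; count non-empty alt text
            if any(k == "alt" and v for k, v in attrs):
                self.imgs_with_alt += 1

    def score(self) -> int:
        # 3 points per criterion, echoing the 0-3 scale used in the appendix tables
        title_pts = 3 if self.has_title else 0
        alt_pts = 3 if self.imgs == 0 else round(3 * self.imgs_with_alt / self.imgs)
        return title_pts + alt_pts

w = QualityWizard()
w.feed('<html><head><title>Demo</title></head>'
       '<body><img src="a.png" alt="chart"><img src="b.png"></body></html>')
```

A real wizard would cover many more criteria (DTD, metadata, outdated markup, and so on) and, as argued above, would still only assist rather than replace a human evaluator.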
Such a system would require defining a standard (core) statement and developing criteria, perhaps based on the AMOUR principle, which specify the required quality level to satisfy the standard.13 The ideal scenario would be intelligent software that performs automatic site and readability scoring, using best-of-breed criteria for both measurements.13

Summary
Quality control of Internet health information rests on four stanchions: consumer education, self-regulation, third-party evaluation, and sanction enforcement.56 The basic ingredients of usefulness are Usability, Relevance, Integrability and Quality.29 Nielsen's 'usability heuristics' should cover all aspects of usability.40 Education of the online user is important. Silberg's four core criteria/JAMA benchmark (Authorship, Attribution, Currency, Disclosure) are the bare minimum to look for in a Website.8,14,42 Additional points are Accessibility, Privacy and Transparency (EU Guidelines).41 Website popularity is not an essential quality requirement.14

REFERENCES
1. Health information online: an American survey from the Pew Internet Project. May 2005. URL: http://www.pewinternet.org/pdfs/PIP_Healthtopics_May05.pdf (Accessed 24 June 2005).
2. Eysenbach G, Ryoung Sa E, Diepgen TL. Shopping around the Internet today and tomorrow: towards the millennium of cybermedicine. BMJ November 1999;319:1294. URL: http://bmj.bmjjournals.com/cgi/content/full/319/7220/1294 (Accessed 1 June 2005).
3. Coiera EW. Will the Internet replace your doctor? Digital doctors. 1999. URL: http://abc.net.au/future/health.htm (Accessed 1 June 2005).
4. Coiera E. Information epidemics, economics, and immunity on the Internet. BMJ November 1998;317:1469-1470.
5. Reaney P. Kylie's case shows breast cancer can strike early. Reuters website. May 2005. URL: http://www.reuters.co.uk/newsArticle.jhtml?type=healthNews&storyID=8517644&section=news&src=rss/uk/healthNews (Accessed 1 June 2005).
6. Breast cancer facts and figures. myDr homepage. March 2001.
URL: http://www.mydr.com.au/default.asp?article=2942 (Accessed 1 June 2005).
7. Kylie's cancer surgery a success. Yahoo UK news. May 2005. URL: http://uk.news.yahoo.com/050521/325/fjhlq.html (Accessed 1 June 2005).
8. Meric F, Bernstam EV, Mirza NQ, Hunt KK, Ames FC, Ross MI, Kuerer HM, Pollock RE, Musen MA, Singletary SE. Breast cancer on the world wide web: cross sectional survey of quality of information and popularity of websites. BMJ 2002;324:577-81. URL: http://bmj.com/cgi/content/full/324/7337/577 (Accessed 1 June 2005).
9. HONcode. Health On the Net Foundation website. URL: http://www.hon.ch/HONcode/ (Accessed 1 June 2005).
10. HON Code of Conduct (HONcode) for medical and health Web sites. Health On the Net Foundation website. URL: http://www.hon.ch/HONcode/Conduct.html (Accessed 1 June 2005).
11. HONcode Toolbar. URL: http://www.hon.ch/HONcode/Plugin/Plugins.html (Accessed 1 June 2005).
12. Boulos MNK. Internet in Health and Healthcare. URL: http://www.e-courses.rcsed.ac.uk/mschi/unit5/KamelBoulos_Internet_in_Healthcare.ppt (Accessed 1 June 2005).
13. Boulos MNK, Roudsari AV, Gordon C, Gray JAM. The Use of Quality Benchmarking in Assessing Web Resources for the Dermatology Virtual Branch Library of the National electronic Library for Health (NeLH). J Med Internet Res 2001;3(1):e5. URL: http://www.jmir.org/2001/1/e5/ (Accessed 1 June 2005).
14. Boulos MNK. On quality benchmarking of online medical/health-related information resources. University of Bath School for Health. March 2004. URL: http://staff.bath.ac.uk/mpsmnkb/MNKB_Quality.PDF (Accessed 1 June 2005).
15. The LIDA Instrument version 1.2 – Minervation validation instrument for health care web sites. © 2005 Minervation Ltd. URL: http://www.minervation.com/mod_lida/minervalidation.pdf (Accessed 12 June 2005).
16. Net Scoring®: criteria to assess the quality of Health Internet information. Last updated 2001.
URL: http://www.chu-rouen.fr/netscoring/netscoringeng.html (Accessed 1 June 2005).
17. Boulos MNK. Activity: Readability of online public/patient health information services. Royal College of Surgeons of Edinburgh message board site. 2004. URL: http://www.e-courses.rcsed.ac.uk/mb3/msgs/dispmessage.asp?MID=MID2004417225527533 (Accessed 1 June 2005).
18. Everything you ever wanted to know about readability tests but were afraid to ask. In: Klare, A Second Look at the Validity of Readability Formulas. Journal of Reading Behaviour 1976;8:129-52. URL: http://www.gopdg.com/plainlanguage/readability.html (Accessed 1 June 2005).
19. Readability.Info. © 2004 by Dave Taylor & Intuitive Systems. URL: http://www.readability.info/uploadfile.shtml (Accessed 12 June 2005).
20. Doak LG, Doak CC, eds. Pfizer Principles for Clear Health Communication, 2nd Ed. New York: Pfizer Inc., 2004. URL: http://www.pfizerhealthliteracy.com/pdfs/Pfizers_Principles_for_Clear_Health_Communication.pdf (Accessed 4 June 2005).
21. Readability Calculations. Micro Power & Light Co. Dallas, TX. URL: http://www.micropowerandlight.com/rdformulas.html (Accessed 5 June 2005).
22. Weeks WB, Wallace AE. Readability of British and American medical prose at the start of the 21st century. BMJ December 2002;325:1451-2. URL: http://bmj.bmjjournals.com/cgi/content/full/325/7378/1451 (Accessed 1 June 2005).
23. Boulos MNK. British Internet-Derived Patient Information on Diabetes Mellitus: Is It Readable? Diabetes Technology & Therapeutics 2005;7(3). © Mary Ann Liebert, Inc. URL: http://www.e-courses.rcsed.ac.uk/mb3/msgs/dispattachment.asp?AID=AID200557131244705 (Accessed 1 June 2005).
24. Walji M, Sagaram S, Sagaram D, Meric-Bernstam F, Johnson C, Mirza NQ, Bernstam EV. Efficacy of Quality Criteria to Identify Potentially Harmful Information: A Cross-sectional Survey of Complementary and Alternative Medicine Web Sites. J Med Internet Res 2004;6(2):e21. URL: http://www.jmir.org/2004/2/e21/ (Accessed 20 June 2005).
25. Downs SH, Black N.
The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 1998 Jun;52(6):377-84. URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=9764259 (Accessed 24 June 2005).
26. Web Accessibility Initiative (WAI). W3C® Web Accessibility Initiative website. Last revised June 2005. URL: http://www.w3.org/WAI/ (Accessed 20 June 2005).
27. Kitemarks. Judge: web sites for health. Last updated September 2004. URL: http://www.judgehealth.org.uk/how_judge_kitemarks.htm (Accessed 20 June 2005).
28. Inaccessible website demo. Disability Rights Commission. 2005. URL: http://www.drc.org.uk/newsroom/demo.asp (Accessed 20 June 2005).
29. Boulos MNK. Optimising the Utility of the NeLH VBL for Musculoskeletal Diseases—Technical Considerations. URL: http://healthcybermap.semanticweb.org/publications/nelh27Nov02.ppt (Accessed 1 June 2005).
30. Boulos MNK. What classes as a website. Royal College of Surgeons of Edinburgh message board. 2003. URL: http://www.e-courses.rcsed.ac.uk/mb3/msgs/dispmessage.asp?MID=MID200312222458705 (Accessed 20 June 2005).
31. ICRA (Internet Content Rating Association). © 1999-2003 Internet Content Rating Association®. URL: http://www.icra.org/about/ (Accessed 1 June 2005).
32. HealthInsite Privacy Statement. HealthInsite Website. Last updated Oct 2004. URL: http://www.healthinsite.gov.au/content/internal/page.cfm?ObjID=00063FB9-061E-1D2D-81CF83032BFA006D (Accessed 22 June 2005).
33. Privacy. eMedicine Health Website. URL: http://www.emedicinehealth.com/common/privacy.asp (Accessed 22 June 2005).
34. HealthInsite Disclaimer. Updated March 2005. URL: http://www.healthinsite.gov.au/content/internal/page.cfm?ObjID=0006CF21-0624-1D2D-81CF83032BFA006D (Accessed 22 June 2005).
35. Bedell SE, Agrawal A, Petersen LE. A systematic critique of diabetes on the world wide web for patients and their physicians. International Journal of Medical Informatics 2004. URL: http://www.e-courses.rcsed.ac.uk/mb3/msgs/dispattachment.asp?AID=AID2004811113029705 (Accessed 1 June 2005).
36. Albert T. Letter to Editor: Transatlantic writing differences are probably exaggerated. BMJ March 2003;326:711. URL: http://bmj.bmjjournals.com/cgi/content/full/326/7391/711 (Accessed 1 June 2005).
37. Chapter 4: Readability assessment of health information on the Internet. URL: http://www.rand.org/publications/documents/interneteval/interneteval.pdf/chap4.pdf (Accessed 23 June 2005).
38. Microsoft® Office Word 2003 (11.5604.5606). Part of Microsoft Office Professional Edition 2003. Copyright © 1983-2003 Microsoft Corporation.
39. Appendix A-3. Heuristic guidelines for expert critique of a Web site. Evaluation Design/Planning and Methodology for the NIH Web Site – Phase 1. URL: http://irm.cit.nih.gov/itmra/weptest/app_a3.htm#usability (Accessed 1 June 2005).
40. Bobby. © 2003-2004 Watchfire Corporation. URL: http://bobby.watchfire.com/bobby/html/en/index.jsp (Accessed 20 June 2005).
41. Quality Criteria for Health Related Websites. Europa European Union Website. Last updated March 2005. URL: http://europa.eu.int/information_society/eeurope/ehealth/quality/draft_guidelines/index_en.htm (Accessed 25 June 2005).
42. Silberg WM, Lundberg GD, Musacchio RA. Assessing, controlling, and assuring the quality of medical information on the Internet: Caveant lector et viewor--Let the reader and viewer beware. JAMA 1997 Apr 16;277(15):1244-5.
43. Managing Web Resources for Persistent Access. National Library of Australia. March 2001. URL: http://www.nla.gov.au/guidelines/persistence.html (Accessed 26 June 2005).
44. MedCIRCLE: The Collaboration for Internet Rating, Certification, Labeling and Evaluation of Health Information. Last updated 17 Dec 2002.
URL: http://www.medcircle.org/ (Accessed 20 June 2005).
45. Ask Me 3 (Pfizer Inc.): Advancing Clear Health Communication to Positively Impact Health Outcomes (Professional Presentation Tool Kit). Internet document 2003. URL: http://www.askme3.org/PFCHC/professional_presentation.ppt (Accessed 24 June 2005).
46. Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res 2004 Sep 29;6(3):e34. URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=15471760&dopt=Abstract (Accessed 20 June 2005).
47. Chestnutt IG. Internet-derived patient information on common oral pathologies: is it readable? Prim Dent Care 2004 Apr;11(2):51-4. URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=15119094 (Accessed 23 June 2005).
48. Clear & Simple: Developing Effective Print Materials for Low-Literate Readers. National Cancer Institute Website. Updated 27 Feb 2003. URL: http://www.cancer.gov/aboutnci/oc/clear-and-simple/ (Accessed 24 June 2005).
49. Office for National Statistics, UK: Adult Literacy Survey: Literacy Level of Adults by Gender and Age. Internet document 1996. URL: http://www.statistics.gov.uk/StatBase/Expodata/Spreadsheets/D5047.xls (Accessed 25 June 2005).
50. Breese P, Burman W. Readability of Notice of Privacy Forms Used by Major Health Care Institutions. JAMA (Reprinted) April 2005;293(13):1593-4. URL: http://www.e-courses.rcsed.ac.uk/mb3/msgs/dispattachment.asp?AID=AID2005513141638705 (Accessed 1 June 2005).
51. Jaffery JB, Becker BN. Evaluation of eHealth web sites for patients with chronic kidney disease. Am J Kidney Dis 2004 Jul;44(1):71-6. URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=15211440 (Accessed 24 June 2005).
52. Kirksey O, Harper K, Thompson S, Pringle M. Assessment of Selected Patient Educational Materials of Various Chain Pharmacies. J Health Commun 2004;9(2):91-93.
URL:
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=15204820 (Accessed 24 June 2005).
53. Fallis D, Fricke M. Indicators of accuracy of consumer health information on the Internet: a study of indicators relating to information for managing fever in children in the home. J Am Med Inform Assoc 2002 Jan-Feb;9(1):73-9. URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=11751805 (Accessed 1 June 2005).
54. Lampe K, Doupi P, van den Hoven MJ. Internet health resources: from quality to trust. Methods Inf Med 2003;42(2):134-42. URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12743649&dopt=Abstract (Accessed 24 June 2005).
55. Morkes J, Nielsen J. Concise, SCANNABLE, and Objective: How to Write for the Web. October 1997. URL: http://www.useit.com/papers/webwriting/writing.html (Accessed 4 June 2004).
56. Eysenbach G. Consumer health informatics. BMJ June 2000;320:1713-1716.
URL: http://bmj.bmjjournals.com/cgi/content/full/320/7251/1713 (Accessed 1 June 2005).

List of abbreviations
AMA: American Medical Association
AMOUR: Achievable, Measurable, Observable, Understandable, Reasonable
CEBM: Centre for Evidence Based Medicine
CHERRIES: Checklist for Reporting Results of Internet E-Surveys
DTD: Document Type Definition
FKGL: Flesch-Kincaid Grade Level
FRE: Flesch Reading Ease
HIDDEL: Health Information Disclosure, Description and Evaluation Language
HONcode: Health On the Net Foundation code of conduct
HSWG: Health Summit Working Group
HTML: Hypertext Markup Language
IE: Internet Explorer
MedCERTAIN: MedPICS Certification and Rating of Trustworthy and Assessed Health Information on the Net
MedCIRCLE: Collaboration for Internet Rating, Certification, Labeling and Evaluation
MS-IE: Microsoft Internet Explorer
NeLH: National electronic Library for Health (now, National Library for Health)
OECD: Organisation for Economic Co-operation and Development
P3P: Platform for Privacy Preferences Project
PICS: Platform for Internet Content Selection
SMOG: Simple Measure of Gobbledegook
W3C: World Wide Web Consortium
WAI: Web Accessibility Initiative
WIF: Web Impact Factor

APPENDICES
Appendix-Box-A: Websites included in pilot study
1. MedlinePlus: http://medlineplus.gov/ or http://www.medlineplus.gov (Accessed 1 June 2005)
2. healthfinder®: http://www.healthfinder.gov/ (Accessed 1 June 2005)
3. HealthInsite: http://www.healthinsite.gov.au/ (Accessed 1 June 2005)
4. HealthConnect: http://www.healthconnect.gov.au (Accessed 1 June 2005)
5. NHS Direct Online: http://www.nhsdirect.nhs.uk (Accessed 1 June 2005)
6. NeLH (National electronic Library for Health); now called NLH (National Library for Health): http://www.nelh.nhs.uk/ or http://www.nlh.nhs.uk (Accessed 1 June 2005)
7.
Cochrane Library: through NeLH (which also goes through the Wiley Interscience interface) http://www.nelh.nhs.uk/cochrane.asp; through the Wiley Interscience interface http://www3.interscience.wiley.com/cgi-bin/mrwhome/106568753/HOME or http://www.mrw.interscience.wiley.com/cochrane/ (Accessed 1 June 2005)
8. DIPEx (Database of Individual Patient Experiences): http://www.dipex.org (Accessed 1 June 2005)
9. NIHSeniorHealth: http://nihseniorhealth.gov/ (Accessed 1 June 2005)
10. eMedicine Health: http://www.emedicinehealth.com/ (Accessed 1 June 2005)
Appendix-Box-1: Centre for Health Information Quality (C-H-i-Q) checklist13
1. Accessibility: Information is in appropriate format for target audience
2. Accuracy: Information is based on best available evidence
3. Appropriateness: Information communicates relevant messages
4. Availability: Information is available to wide audience
5. Currency: Information is up-to-date
6. Legibility: Written information is clearly presented
7. Originality: Information not already produced for the same audience in the same format
8. Patient involvement: Information is specifically designed to meet needs of patient
9. Reliability: Information addresses all essential issues
10. Readability: Words/sentences are kept short; jargon minimized

Appendix-Table-1: Scores of Websites in Pilot Study
(Columns: MedlinePlus | healthfinder® | HealthInsite | HealthConnect | NHS Direct Online | NeLH/NLH | Cochrane Library | DIPEx | NIHSeniorHealth | eMedicine Health. The HealthConnect site was under re-development, so its C-H-i-Q criteria could not be scored.)
Accessibility: 5 | 4 | 5 | – | 4 | 1 | 0 | 2 | 2 | 4
Accuracy: 4 | 3 | 5 | – | 4 | 5 | 5 | 2 | 2 | 4
Appropriateness: 4 | 4 | 5 | – | 5 | 3 | 2 | 1 | 2 | 5
Availability: 5 | 4 | 5 | – | 5 | 2 | 1 | 3 | 2 | 4
Currency: 3 | 4 | 4 | – | 4 | 4 | 4 | 3 | 3 | 3
Legibility: 5 | 2 | 5 | – | 5 | 2 | 2 | 2 | 3 | 5
Originality: 5 | 2 | 5 | – | 5 | 5 | 4 | 4 | 5 | 5
Patient involvement: 5 | 5 | 5 | – | 5 | 2 | 2 | 5 | 5 | 5
Reliability: 3 | 3 | 4 | – | 4 | 3 | 3 | 1 | 2 | 4
Readability: 3 | 3 | 4 | – | 3 | 1 | 1 | 2 | 3 | 4
C-H-i-Q subtotal: 42 | 34 | 47 | 0 | 44 | 28 | 24 | 25 | 29 | 43
Web Impact Factor (AltaVista results; scored 0-99=1, 100-999=2, 1,000-9,999=3, 10,000-99,999=4, 100,000+=5): 53,100 (4) | 376,000 (5) | 36,000 (4) | 142 (2) | 231,000 (5) | 43,300 (4) | 72 (1) | 1,010 (3) | 1,800 (3) | 56,900 (4)
Homepage HONcode-accredited: 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1
Language option(s): 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0
Breast cancer page HONcode-accredited: 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1
Additional features: 0 | 0 | 0 | 0 | 1 (Lang options, text size, contrast, speech) | 0 | 0 | 0 | 1 (Text in audio clips) | 1 (ICRA label)
Miscellaneous subtotal: 6 | 8 | 7 | 2 | 6 | 4 | 2 | 3 | 4 | 7
Total Score: 48 | 42 | 54 | 2 | 50 | 32 | 26 | 28 | 33 | 50
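The Web Impact Factor banding used in Appendix-Table-1 (0-99=1 up to 100,000+=5) is a simple step function; a sketch:

```python
def wif_score(backlinks: int) -> int:
    """Band an AltaVista back-link count into the 1-5 Web Impact Factor
    score used in Appendix-Table-1."""
    bands = [(100_000, 5), (10_000, 4), (1_000, 3), (100, 2), (0, 1)]
    for floor, score in bands:
        if backlinks >= floor:
            return score

# e.g. HealthInsite's 36,000 AltaVista results band to a score of 4
```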
Appendix-Box-2: Minervation tool parameters15
Level 1 (Accessibility) (Maximum 63 points)
1. Page setup | Automated test (maximum 57 points including all 4 automated tests)
2. Access restrictions | -do-
3. Outdated code | -do-
4. Dublin core title tags | -do-
5. Browser test (maximum 3 points)
6. Registration (maximum 3 points)
Level 2 (Usability) (Maximum 54 points)
1. Clarity (6 questions; maximum 18 points)
2. Consistency (3 questions; maximum 9 points)
3. Functionality (5 questions; maximum 15 points)
4. Engagibility (4 questions; maximum 12 points)
Level 3 (Reliability) (Maximum 27 points + 24 supplemental points)
1. Currency (3 questions; maximum 9 points)
2. Conflicts of interest (3 questions; maximum 9 points)
3. Content production (3 questions; maximum 9 points)
4. Content production procedure – supplemental (5 questions; maximum 15 points)
5. Output of content – supplemental (3 questions; maximum 9 points)

Appendix-Box-3: Flesch Reading Ease (FRE) score
This readability score is normally used to assess adult materials.21 It bases its rating on the average number of syllables per word (ASW) and the average sentence length in words (ASL). It rates text on a scale of 0 to 100; the higher the score, the easier the document is to understand. The score for 'plain English' is 65. Flesch scores of <30 indicate extremely difficult reading, as in a legal contract.22
Formula for FRE score
FRE = 206.835 – (1.015 x ASL) – (84.6 x ASW), where ASL = average sentence length (number of words / number of sentences) and ASW = average number of syllables per word (number of syllables / number of words)

Appendix-Box-4: Flesch-Kincaid Grade Level (FKGL) score
This score is most reliable when used with upper elementary and secondary materials.21 It also bases its rating on ASW and ASL. It rates text on a U.S. grade-school level (a rough measure of how many years of schooling it would take someone to understand the content, with a top score of 12).
A score of 5.0 means that a fifth grader (a 10-year-old) can understand the document. For most standard documents, we should aim for a score of approximately 5.0.
Formula for FKGL score
FKGL = (0.39 x ASL) + (11.8 x ASW) – 15.59

Appendix-Box-5: Gunning's Fog Index
This index is widely used in the health care and general insurance industries for general business publications.21 Fog scores of >16 indicate extremely difficult reading, as in a legal contract.22
Calculating the Fog Index18
(A) Total number of words divided by total number of sentences gives the average number of words per sentence
(B) Number of words with 3 or more syllables divided by total number of words gives the percentage of difficult words
(C) The sum of (A) and (B), multiplied by 0.4, is the Fog Index in years of education
Other formulae19
Fog Index = 0.4*(wds/sent + 100*((wds >= 3 syll)/wds))
ARI = 4.71*chars/wds + 0.5*wds/sentences – 21.43
Coleman-Liau = 5.89*chars/wds – 0.3*sentences/(100*wds) – 15.8
Lix = wds/sent + 100*(wds >= 6 char)/wds
SMOG Grading = square root of (((wds >= 3 syll)/sent)*30) + 3

Appendix-Box-5a: Configuring MS Word to display Readability Scores
Tools menu → Options → Spelling & Grammar tab
'Check grammar with spelling' check box selected
'Show readability statistics' check box selected; click OK

Appendix-Table-2: Comparison on the basis of Minervation online tool
Parameter: HealthInsite | eMedicine
Level 1 (Accessibility) (Maximum = 63 points)
First four automated tests (maximum 57 points): 50 | 28
Browser test (maximum 3 points): 3 | 3
Registration (maximum 3 points): 3 | 3
Subtotal (% of 63): 56 (88.9%) | 34 (54%)
Level 2 (Usability) (Maximum = 54 points)
Clarity (6 questions; maximum 18 points)
- Is there a clear statement of who this web site is for? 3 | 0
- Is the level of detail appropriate to their level of knowledge? 3 | 2
- Is the layout of the main block of information clear and readable? 2 | 2
- Is the navigation clear and well structured? 3 | 3
- Can you always tell your current location in the site? 2 | 3
- Is the colour scheme appropriate and engaging? 2 | 2
Consistency (3 questions; maximum 9 points)
- Is the same page layout used throughout the site? 2 | 3
- Do navigational links have a consistent function? 3 | 3
- Is the site structure (categories or organisation of pages) applied consistently? 3 | 3
Functionality (5 questions; maximum 15 points)
- Does the site provide an effective search facility? 3 | 3
- Does the site provide effective browsing facilities? 3 | 3
- Does the design minimise the cognitive overhead of using the site? 2 | 2
- Does the site support the normal browser navigational tools? 2 | 2
- Can you use the site without third party plug-ins? 3 | 3
Engagibility (4 questions; maximum 12 points)
- Can the user make an effective judgment of whether the site applies to them? 3 | 2
- Is the web site interactive? 3 | 0
- Can the user personalise their experience of using the site? 3 | 1
- Does the web site integrate non-textual media? 0 | 2
Subtotal (% of 54): 45 (83.3%) | 39 (72.2%)
Level 3 (Reliability) (Maximum = 27 points)
Currency (3 questions; maximum 9 points)
- Does the site respond to recent events? 3 ('News' link) | 2 (eNewsletter would be sent)
- Can users submit comments on specific content? 3 ('Consumer participation' link) | 0
- Is site content updated at an appropriate interval? 3 (mentioned) | 1
Conflicts of interest (3 questions; maximum 9 points)
- Is it clear who runs the site? 3 (Australian government) | 3 (private company)
- Is it clear who pays for the site? 3 (-do-) | 0 (cannot tell)
- Is there a declaration of the objectives of the people who run the site? 3 | 3
Content production (3 questions; maximum 9 points)
- Does the site report a clear content production method? 3 ('About HealthInsite' link) | 3 ('About us' link)
- Is this a robust method? 2 | 2
- Can the information be checked from original sources? 0 (can't tell) | 0 (can't tell)
Subtotal (% of 27): 23 (85.2%) | 14 (51.8%)
Grand total (% of 144): 124 (86.1%) | 87 (60.4%)

Appendix-Table-3: Automated Accessibility results of HealthInsite and eMedicine (Minervation)
HealthInsite
1.1 Page Setup 80%
1.1.1 Document Type Definition 3
1.1.2 HTTP-Equiv Content-Type (in header) 0
1.1.3 HTML Language Definition 3
1.1.4 Page Title 3
1.1.5 Meta Tag Keywords 3
1.2 Access Restrictions 66%
1.2.1 Image Alt Tags 3
1.2.2 Specified Image Widths 2
1.2.3 Table Summaries 0
1.2.4 Frames 3
1.3 Outdated Code 100%
1.3.1 Body Tags - Body Background Colour 3
1.3.2 Body Tags - Body Topmargin 3
1.3.3 Body Tags - Body Margin Height 3
1.3.4 Table Tags - Table Background Colour 3
1.3.5 Table Tags - Table Column (td) Height 3
1.3.6 Table Tags - Table Row (tr) Height 3
1.3.7 Font Tags - Font Color 3
1.3.8 Font Tags - Font Size 3
1.3.9 Align (non style sheet) 3
1.4 Dublin Core Tags 100%
1.4.1 Dublin Core Title Tag 3
Accessibility: 87% (50/57)
TOTAL RATING 87% (50/57)
(Summary: http://www.healthinsite.gov.au scores 87%, Medium. 1.1 Page Setup pass rate ~80%, Medium; 1.2 Access Restrictions pass rate ~66%, Medium; 1.3 Outdated Code pass rate ~100%, High; 1.4 Dublin Core Tags pass rate ~100%, High)
eMedicine
1.1 Page Setup 60%
1.1.1 Document Type Definition 0
1.1.2 HTTP-Equiv Content-Type (in header) 3
1.1.3 HTML Language Definition 0
1.1.4 Page Title 3
1.1.5 Meta Tag Keywords 3
1.2 Access Restrictions 50%
1.2.1 Image Alt Tags 1
1.2.2 Specified Image Widths 2
1.2.3 Table Summaries 0
1.2.4 Frames 3
1.3 Outdated Code 48%
1.3.1 Body Tags - Body Background Colour 0
1.3.2 Body Tags - Body Topmargin 0
1.3.3 Body Tags - Body Margin Height 0
1.3.4 Table Tags - Table Background Colour 2
1.3.5 Table Tags - Table Column (td) Height 3
1.3.6 Table Tags - Table Row (tr) Height 3
1.3.7 Font Tags - Font Color 0
1.3.8 Font Tags - Font Size 3
1.3.9 Align (non style sheet) 2
1.4 Dublin Core Tags 0%
1.4.1 Dublin Core Title Tag 0
Accessibility: 49% (28/57)
TOTAL RATING 49% (28/57)
(Summary: http://www.emedicinehealth.com/ scores 49%, Low. 1.1 Page Setup pass rate ~60%, Medium; 1.2 Access Restrictions pass rate ~50%, Low; 1.3 Outdated Code pass rate ~48%, Low; 1.4 Dublin Core Tags pass rate ~0%, Low)

Appendix-Table-4: Comparison of Content category (Net Scoring)
Content (information) quality (Content category) (Maximum = 87 points): HealthInsite (Australia) | eMedicine Consumer Health (USA)
Accuracy (essential criterion): 9 | 9
Hierarchy of evidence (important criterion): 6 ('Reviews of Evidence for Treatments') | 0 (not specified in any page)
Original source stated (essential criterion): 9 (most pages mentioned it) | 0 (not specified in any page)
Disclaimer (important criterion): 6 (disclaimer provided) | 6 (disclaimer provided)
Logic organization (navigability) (essential criterion): 7 (pages redirected to partner websites, with notification) | 9
Quality of the internal search engine (important criterion): 6 | 6
General index (important criterion): 6 | 6
What's new page (important criterion): 4 ('News' and 'HealthInsite Newsletter') | 3 ('eMedicine Spotlight')
Help page (minor criterion): 3 | 0
Map of the site (minor criterion): 3 | 3
Omissions noted (essential criterion): 0 (none) | 0
Fast load of the site and its different pages (important criterion): 6 | 4 (ads reduced speed of loading)
Clear display of available information categories (factual data, abstracts, full-text documents, catalogue, databases) (important criterion): 0 (none) | 0
SUBTOTAL (% of 87): 65 (74.7%) | 46 (52.9%)

Appendix-Table-5: Comparison of Credibility category (Net Scoring)
Completeness / currency / usefulness of information (Credibility category) (Maximum = 99 points): HealthInsite (Australia) | eMedicine Consumer Health (USA)
Name, logo and references of the institution on each document of the site (essential criterion): 9 (all pages, including partner sites, had them) | 9 (all pages)
Name and title of author on each document of the site (essential criterion): 0 (none mentioned) | 0 (none mentioned)
Context: source of financing, independence of the author(s) (essential criterion): 0 (none mentioned) | 0 (none mentioned)
Conflict of interest (important criterion): 0 (none mentioned) | 0 (none mentioned)
Influence, bias (important criterion): 0 (none mentioned) | 3 (mentioned partly in Disclaimer)
Updating: currency information of the site, including date of creation and date of last update/last version (essential criterion): 9 (Yes/Yes) | 6 (some pages mentioned it; Yes/No)
Relevance/utility (essential criterion): 9 (for public information) | 8 (for public + healthcare professionals)
Editorial review process (essential criterion): 9 (mentioned) | 9 (mentioned)
Webmastering process (important criterion): 1 (mentioned in one partner site) | 0 (not mentioned anywhere)
Scientific review process (important criterion): 6 ('Reviews of Evidence for Treatments') | 0 (not mentioned)
Target/purpose of the web site; access to the site (free or not, reserved or not) (important criterion): 6 (free access to all pages) | 4 (site had a 'Registration' link; general public info could be freely accessed; sponsored links present)
Quality of the language and/or translation (important criterion): 6 (good language; other language options provided) | 3 (good language; no other language options)
Use of metadata (essential criterion): 0 | 10 (ICRA label v02)
SUBTOTAL (% of 99): 55 (55.5%) | 52 (52.5%)

Appendix-Table-6: User interface / Ease of finding information / Usability (Net Scoring)
HealthInsite (Australia) | eMedicine Consumer Health (USA)
Hyperlinks category (Maximum = 45 points; minus 6 points for NA parameter, so maximum = 39 points)
Selection (essential criterion): 9 | 9
Architecture (important criterion): 6 | 4 (hyperlinks were a bit cluttered)
  23. 23. Content (essential criterion) 9 9Web Impact Factor: Back-links 4 (36,000 results from AltaVista) 4 (56,900 results from AltaVista)(important criterion)Regular verification that hyper-links are 0 (not mentioned, though no broken links 0 (not mentioned, though no broken linksfunctioning, i.e., no broken links were encountered) were encountered)(important criterion)In case of modification of the site NA (Not applicable) NAstructure, link between old and newHTML documents (important criterion)Distinction between internal and external 3 (Specified) 3 (There were no separate hyperlinks)hyper-links (minor criterion) SUBTOTAL (%of 39) 31 (79.5%) 29 (74.4%)Design category (Maximum=21points)Design of the site (essential criterion) 9 (Neat and trim, user-friendly) 7 (Somewhat cluttered, likely to be confusing to some)Readability of the text, (important 3 (See Readability scores) 4 (See Readability scores)criterion)Quality of the print (important criterion) 3 [Combination of Tahoma (font 9, 9.5, 5 [Only Times New Roman (font 12) for 10, 11.5), Verdana (font 7, 9.5, 12), Ariel headings and Verdana (font 7.5) for text] (font 10)]SUBTOTAL (%of 21) 15 (71.4%) 16 (76.2%)Accessibility category (Maximum=12points)Accessibility from the main search 6 (Dual mode of access – from search 6 (Same arguments apply)engines and catalogues (important box and from A-Z site map; latter gavecriterion) more logical arrangement of topics. 
Search engine gave results according to relevance ranking)Intuitive address of a site (important 0 (Not present) 0criterion) SUBTOTAL (% of 12) 6 (50%) 6 (50%)Interactivity category (Maximum=18points)Feedback mechanism: Email of author on 5 (‘Feedback’/‘Contact us’ links in main 5 (‘Contact us’ links in all site pages; noevery document (essential criterion) pages and some partner site pages; no author or contact info) author or contact info)Forums, chat (minor criterion) 2 (‘Consumer participation’ link) 0 (None)Traceability, cookies etc (important 6 (Cookies etc specified) 6 (same points)criterion) SUBTOTAL (% of 18) 13 (72.2%) 11 (61.1%)Ethics category (Maximum=18points)Liability of the reader (essential 9 (‘Disclaimer’ link) 9 (‘Disclaimer’ link)criterion)Medical privacy (essential criterion) 9 (‘Privacy’ link) 9 (‘Privacy’ link) SUBTOTAL (% of 18) 18 (100%) 18 (100%) Appendix-Box-6: User interface design principles by Nielsen (1994)39 1. Visibility of system status: The system should keep users informed about what is going on, through appropriate timely 2. System and real world match: Follow real-world conventions, making information appear in a natural and logical order. 3. User freedom: Users need a clearly marked ‘emergency exit’ from mistakes. Support undo and redo. 4. Consistency and standards: Follow platform conventions to avoid confusion among users 5. Error prevention: Careful design prevents a problem from occurring 6. Recognition rather than recall: Make objects, actions, and options visible. Instructions for use of the system should be visible. 7. Flexibility and efficiency: Allow users to tailor frequent actions. 8. Aesthetic design: Dialogues should not contain information that is irrelevant or rarely needed. 9. Help users recognize and recover from errors: Error messages should express in plain language the problem, and a solution. 10. 
Help and documentation: Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out and not be too large. Usability principles by Constantine (1994)39 A. Structure Principle: Organize the user interface purposefully, that put related things together and separate unrelated things. B. Simplicity Principle: Make common tasks simple to do, communicate simply in user’s own language, provide good shortcuts. C. Visibility Principle: Keep all options and materials for a given task visible. D. Feedback Principle: Keep users informed of actions/interpretations, changes of state/condition, and errors/exceptions. E. Tolerance Principle: Be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing while preventing errors. F. Reuse Principle: Reduce the need for users to rethink and remember by reusing internal and external components and behaviors.MSc Healthcare Informatics RCSEd+Univ of Bath; Unit 5-Remote Healthcare; 2005. Tutor: MNK Boulos; Student: Sanjoy Sanyal 23
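The Net Scoring subtotals in the tables above follow a simple weighting scheme (as the scores reflect, essential criteria are marked out of 9, important out of 6, minor out of 3, and not-applicable criteria are excluded from the maximum). A minimal sketch of that arithmetic, using the Hyperlinks category scores for HealthInsite as sample data; the dictionary keys and the `subtotal` helper are illustrative names, not part of the Net Scoring tool itself:

```python
# Net Scoring-style subtotal: each criterion is (score, maximum);
# criteria marked NA are excluded from both the subtotal and the maximum.
# Sample data: Hyperlinks category, HealthInsite (values from the table above).
hyperlinks_healthinsite = {
    "Selection (essential)": (9, 9),
    "Architecture (important)": (6, 6),
    "Content (essential)": (9, 9),
    "Web Impact Factor (important)": (4, 6),
    "Broken-link verification (important)": (0, 6),
    "Old/new document links (important)": None,  # NA: excluded from the maximum
    "Internal/external distinction (minor)": (3, 3),
}

def subtotal(criteria):
    """Return (points, maximum, percentage), ignoring NA criteria."""
    scored = [v for v in criteria.values() if v is not None]
    points = sum(score for score, _ in scored)
    maximum = sum(mx for _, mx in scored)
    return points, maximum, round(100 * points / maximum, 1)

print(subtotal(hyperlinks_healthinsite))  # → (31, 39, 79.5), i.e. the 31/39 (79.5%) subtotal in the table
```

Spelling the scheme out this way makes the category maxima auditable, which matters here because the NA adjustment (45 → 39 points) is otherwise easy to miss.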
Appendix-Box-7: Usability metrics for objectively measuring Web content (Morkes and Nielsen)55

1. Task time: Number of seconds to find answers for tasks
2. Task errors: Percentage score based on the number of incorrect answers
3. Memory: Recognition and recall
   a. Recognition memory: A percentage score based on the number of correct answers minus the number of incorrect answers to questions
   b. Recall memory: A percentage score based on the number of pages correctly recalled minus the number incorrectly recalled
4. Sitemap time:
   a. Time to recall site structure: The number of seconds to draw a sitemap
   b. Sitemap accuracy: A percentage score based on the number of pages and connections between pages correctly identified, minus the number of pages and connections incorrectly identified
5. Subjective satisfaction: The subjective satisfaction index is the mean score of four indices – Quality, Ease of use, Likeability, User affect

Appendix-Box-8: Various readability tools, formulae and software18-21,37

• Dale-Chall: Original vocabulary-based formula used to assess upper-elementary through secondary materials
• Fry Graph: Used over a wide grade range of materials, from elementary through college and beyond
• Powers-Sumner-Kearl: For assessing primary through early-elementary level materials
• FORCAST: Focuses on functional literacy. Used to assess non-running narrative, e.g. questionnaires, forms, tests etc.
• Spache: Original vocabulary-based formula widely used in assessing primary through fourth-grade materials
• McLaughlin's SMOG (Simple Measure of Gobbledegook): Unlike the other formulas, SMOG predicts the grade level required for 100% comprehension
• Cloze procedure: The 'cloze' procedure (from the word 'closure') for testing writing is often treated as a readability test because a formula exists for translating the data from cloze tests into numerical results
• Lexile Framework®: Software tool that measures readability in both English and Spanish
• ARI: The Automated Readability Index is typically higher than Kincaid and Coleman-Liau, but lower than Flesch
• Coleman-Liau formula: Usually gives a lower grade than Kincaid, ARI and Flesch when applied to technical documents
• Lix formula: Developed by Björnsson of Sweden; very simple, and employs a mapping table as well
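The three scores reported in this study (FRE, FKGL and Gunning Fog) all reduce to simple arithmetic over word, sentence and syllable counts; tools disagree with one another mainly because they count syllables and 'complex' words differently, not because the coefficients differ. A minimal sketch using the standard published coefficients; the counts in the demo are hypothetical round numbers, not figures from the study:

```python
def flesch_reading_ease(words, sentences, syllables):
    # FRE: higher = easier; roughly 60-70 is "plain English"
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # FKGL: US school grade level needed to understand the text
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def gunning_fog(words, sentences, complex_words):
    # Fog index: complex_words = words of three or more syllables
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

# Hypothetical counts for a 100-word passage: 5 sentences,
# 160 syllables, 12 complex words (illustrative only).
print(round(flesch_reading_ease(100, 5, 160), 1))   # 51.2
print(round(flesch_kincaid_grade(100, 5, 160), 1))  # 11.1
print(round(gunning_fog(100, 5, 12), 1))            # 12.8
```

Because all three formulas share the words-per-sentence and syllables-per-word ratios, any discrepancy between two tools run on the same text (as seen in the manual vs. online results in this study) traces back to how each tool tokenises sentences and counts syllables.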
