Defining Metadata

Summary: The Dublin Core work leaves out the importance of establishing an intended use as context for metadata.  Having this context makes the levels of interoperability and some of the issues around metadata storage much clearer.

Dublin Core leaves out the importance of intended use when discussing metadata.  It may be too obvious to those close to the problem.  Their definition
      "Metadata is data about data."
while correct, is insufficient.  All data is metadata in some context.  A clearer definition is:
      "Metadata is data about data that is useful in a specific context of intended use."

John Moehrke's post gives good examples of the kinds of intended use that are important for medical records.

It makes sense to say that PatientID is metadata about a document in different contexts:

  • It could mean that "This document is about PatientID"
  • It could mean that "This document references PatientID", e.g., a document about a child references the mother.

You need the context of use to understand metadata.

The context of use also explains the levels of interoperability that are otherwise left dangling by the Dublin Core.  The degree of interoperability needed depends on the context of the intended use.  An example of the lowest level of interoperability might be a piece of metadata called "license".

At the lowest level, that word "license" is all you know about the metadata.  You can only guess about possible meanings.  You don't know the format of "license".  Maybe it is a text blob that contains legal language.  Maybe it's a URL to a document in an unknown format.  Maybe it's a UUID.  This is the lowest level of interoperability and it makes automated processing nearly impossible. But, it's an important improvement over having nothing.  There are many situations where this vague hint is sufficient information for a person to figure out what to do.

At the highest level, you find something like "diagnosticCode", with a specification that it is to be encoded as an HL7 CWE, with a value selected from the 2011 XYZ profile value set.  Now I have the semantic meaning, the format, the vocabulary, complete version information, and can perform extensive automatic processing.
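
To make the two ends of that scale concrete, here is a minimal sketch in Python.  It is purely illustrative; the field names and the value-set label are my own assumptions, not definitions from Dublin Core, HL7, or DICOM.

```python
# Illustrative only: field names and the value-set label are assumptions,
# not actual Dublin Core, HL7, or DICOM definitions.

# Lowest level: the name "license" is all a recipient knows.  The value might
# be legal text, a URL, or an opaque identifier -- a person can often cope,
# but automated processing has to guess.
low_interop = {"license": "some opaque value"}

# Highest level: semantics, encoding, vocabulary, and version all travel with
# the metadata, so a receiver can process it automatically.
high_interop = {
    "diagnosticCode": {
        "encoding": "HL7 CWE",                 # agreed data type
        "code": "12345",                       # value drawn from the named value set
        "codeSystem": "example-code-system",   # hypothetical vocabulary name
        "valueSet": "2011 XYZ profile",        # value set named in the post
        "valueSetVersion": "2011",
    }
}

def machine_processable(entry) -> bool:
    """A receiver can act without human help only when the context of use
    (encoding, vocabulary, version) is carried along with the value."""
    return isinstance(entry, dict) and {"encoding", "code", "codeSystem"} <= entry.keys()

print(machine_processable(low_interop["license"]))          # False
print(machine_processable(high_interop["diagnosticCode"]))  # True
```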

In early discussions defining metadata, it's important to keep metadata, intended use, and the needed degree of interoperability separate.  They are different concepts.

Another issue that is not mentioned in Dublin Core is the decision of how metadata is stored and conveyed.  This is an interface and exchange problem only.  Within any processing system you don't need agreement with others about how any data is stored or conveyed.  But metadata discussions do need to understand that when exchanging metadata there are three possible situations:

  • The metadata may be embedded in the document, and not otherwise exposed.  This means that it is only accessible to systems and people that understand the document format.  An example of this could be "patient's mother" or "KVP setting".  These are metadata for some rather specialized uses in genomics and procedure analysis.  An indexing registry for medical records is unlikely to maintain these as a separately stored metadata index.
  • The metadata might only be available as a separate item.  The hash value for a document is almost never stored as part of the document.  Its use is as a separate piece of metadata used by the privacy, security, and integrity systems.
  • The metadata might be stored both as part of the document and as a separate item.  PatientID is often stored both ways.  When using PatientID as part of finding and selecting documents, it is appropriate to have separate indices for many reasons.  But when processing those documents, it is necessary to have that PatientID information in context within the document.  This does lead to some considerations about consistency rules when defining how the metadata is to be used, and that is normal; the sketch below illustrates the kind of consistency check involved.
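
A minimal sketch of that third case, assuming an invented document store and index (these structures are illustrative, not any standard's data model):

```python
# Illustrative sketch: PatientID stored both inside the document and in a
# separate index used for finding documents.  The structures are invented.

documents = {
    "doc-001": {"PatientID": "P123", "body": "report text"},
}

# Separate metadata index used for finding and selecting documents.
patient_index = {"P123": ["doc-001"]}

def consistent(doc_id: str) -> bool:
    """The consistency rule: the PatientID embedded in the document and the
    separately stored index entry must agree."""
    pid = documents[doc_id]["PatientID"]
    return doc_id in patient_index.get(pid, [])

print(consistent("doc-001"))  # True as long as both copies stay in sync
```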

 

May 17, 2012 in Current Affairs, Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)

Good standards take time

     "I want what I want, I want it now" - by Lauren Christy

Those with no insight into the process want standards now.  They don't understand what it takes to make a successful high quality standard, and their demands for "NOW" result in the proliferation of bad, duplicative, and failed standards.

Good standards take time.  Three standards that have withstood the test of time show this:

  • ASCII.  This simple character set standard took three years of work, from 1963 to 1966, to develop.  It has been tweaked a few times since, but it is fundamentally unchanged.
  • TCP/IP.  This networking standard was first outlined on paper and funding began in 1973.  The initial operational roll-out was 1982.  So it took nine years of work.  It has been tweaked a few times since then, but it is fundamentally unchanged.
  • Fortran 77.  The Fortran effort was a split effort that began in 1966.  One split simply took the language manual for the leading IBM 7094 Fortran compiler and issued it as Fortran 66.  That took nine months of publication and editorial scrubbing of the manual.  (This would be impossible in today's intellectual property environment.  Back then, it was easy to get IBM's permission.)  The development of the language standard took 11 years.  Fortran 77 was published early in 1978.

The DICOM addition of MPEG-4 HD video encoding is one of the simplest and fastest standard additions that I've ever experienced.  It took only 9 months.  Examining the steps and work involved may help understand why it takes time to do a good job on a standard.  More complex or controversial standards take much longer.

The stages involved were:

  • Workitem definition and approval
  • Prepare first draft for WG-06 review
  • Prepare Public Comment version for WG-06 review and publication
  • Prepare Ballot version for WG-06 review and publication
  • WG-06 review and Final Text publication

DICOM has a substantial work item gate that must be cleared before work can start.  This eliminates most of the frivolous, hobby horse, and "boil the ocean" proposals.  To get started the sponsors must:

  • Provide at least one realistic use case.  This use case will be reviewed by the medical professional societies and they must agree that it is realistic to expect it in regular medical practice.  It's important to understand that this is a professional society opinion, not an individual doctor's opinion.  This keeps out the research and personal hobby proposals.
  • Identify two vendors (not just one) that have a commitment to work on the standard and probably implement it.
  • Have a verifiable completion goal.  This is important for project management, scope management, etc.  "Issue a standard" is not a completion goal.  It must be clear how the completion goal is related to the use case and reflects the standardization need.
  • Have a realistic work plan for accomplishing the goal.  This will be reviewed by the full committee and revised based on experience with other standards work.
  • Have an identified person to act as editor.  This can't be a company or group commitment.  It has to be a person who has this job as an assignment from their boss.  (This is also a measure of the reality of the vendor commitments.)

The workitem in this case was extremely simple:  Add the MPEG-4 HD (high definition) encoding as an option for any DICOM IOD that encodes video.

DICOM already had MPEG-2 and MPEG-4 SD (standard definition).  It was obvious that HD cameras were becoming practical for endoscopy and other uses.  It was clear that the video industry was ready for widespread deployment of high definition equipment.  Multiple vendors wanted a DICOM standard so that they would have a stable feature target.  Completion was clear and the estimated time was "under a year".  It had clearly identified staff.  The expected project plan was:

  • prepare initial draft (about a month)
  • initial DICOM WG-06 review
  • one or two tcons to deal with review comments (about two months)
  • WG-06 approval for public comment issuance
  • 7 week comment period
  • one tcon to deal with public review comments (if any) (about two months)
  • WG-06 approval for ballot issuance
  • 45 day ballot period
  • one tcon to deal with ballot comments (if any) (about a month)
  • WG-06 review, final text preparation, issue standard

I'm part of WG-06, so my perspective is based on the four WG-06 interactions. 

The initial draft was fairly simple and easy.  They took the final text for the addition of MPEG-4 SD and made the textual changes needed to turn it into an HD proposal.  Issues raised in the initial review:

  • There were some additional video uses in DICOM that had been added since MPEG-4 SD was initially issued.  These needed to be added to the HD proposal.
  • There was an open question on mandatory formats.  This needed to be resolved with a firm proposal before public comment.  Public comment can respond with corrections, but you cannot leave that kind of detail out.  In theory, the public comment version should be a complete ready to go standard.  In practice there are always problems, but you only find these through the effort of writing a complete standard and then examining how well it will work.
  • There were some editorial problems, like missing references to the MPEG standards documents.

Resolving the formats issue illustrates why standards take time.  There are two core questions that need answering:  What formats were going to be built into the cameras?  What formats were going to be reasonable to implement on all of the possible players?  This means getting the attention and feedback from the chip design teams in the consumer and professional camera divisions of non-medical companies.  This takes time.  They are busy dealing with their own product issues.

The public comment version had a firm proposal for mandatory formats based on partial feedback from the sensor makers and player vendors.  It went through WG-06 review which fixed more editorial problems and made clarity revisions.  This review identified two more issues for public comment:

  • The license and patent section was missing.  This was not viewed as a stopper issue, but clearly needed to be fixed before ballot.  These issues are not open to standards modification, and would not have a likely impact on the rest of the standard.
  • The proposed mandatory formats approach would permit a system to offer HD formats without offering SD formats.  The acceptability of this was an identified public comment issue.

The public comment feedback gives time for a repeat of the process of getting feedback from chip vendors, camera makers, player makers, and starts to get real feedback from the workstation and device vendors.  The workstation and device vendors mostly ignored the earlier stages because they plan to buy sub-components from the camera and player makers.  At this stage they start confirming with their current and candidate vendors that this standard will be acceptable.  This gets a much more complete review from all of those vendors.
The result was:

  • Some minor revisions to the mandatory formats.  The bigger issue of allowing an HD only device was considered acceptable.  The marketplace and functional requirements would take care of that.
  • The presentation and layout of the mandatory format description needed clarification to remove some confusion.


The ballot version was prepared easily, with the inclusion of the format fixes and a section identifying the license and patent issues around MPEG-4 HD.

The ballot review cycle went very fast in WG-06.  A few more typos were found, and the ballot was issued after about 15 minutes of review.

The ballot comments brought in some of the editorial consistency nit-pickers.  They don't waste their time on the earlier versions.  For the ballot versions you get a few of the QA nitpickers checking all the details, cross references, consistency, wording ambiguities, etc.  You also get the last comments from the chip, camera, and player vendors.  The medical device vendors send the ballot version to their vendors saying "Last chance.  This will be part of our next contract requirements.  Speak now if you will have a problem." 

The result was some feedback corrections from the editorial nit-pickers.  Even after all these reviews there were some sections that could be read more than one way.  Those familiar with the subject didn't notice the alternative incorrect readings, but the QA reviewers caught those problems.  The medical vendors got the OK from their vendors.

The approval and issuance of final text took about 15 minutes.  The editorial problems and ambiguities had been fixed based on the comments.

This whole process took nine months.  But look at how many people had to be involved.  There were:

  • clinical staff,
  • medical professional societies,
  • medical device vendors (including product planning, product management, legal, engineering, QA, manufacturing, and purchasing departments),
  • camera and player vendors (again, all those same divisions),
  • sensor chip vendors (primarily their product planning and product management teams, although legal and engineering might have been needed to advise them). 

Working on standards is not a drop everything type of issue.  All these groups give this work the same priority as their other routine work.  This is why there are the 7 week and 45 day response periods.  These give enough time for all the handoffs and internal processes needed to get good quality feedback.

This is for an incredibly simple-seeming standard.  For novel technology and significantly complex issues, there is much more engineering involved and the internal review and feedback cycles take more work.

(Bad movie reference: reread this imagining Denise Richards washing a Jeep.)

February 27, 2012 in Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)

Standards are not enough, you also need good administrative decisions

John Halamka's blog post shows the importance of having good administrative procedures to accompany the available standards and technology. Without these, the new technology and standards do not improve patient care as they should.

I noticed several administrative decisions that made his experience much worse and likely caused some confusion. They are not driven by standards or technology.

First, they apparently failed to explain the nature of the CD that he was given. He discusses the need for a vendor neutral format that can be used by any vendor. The CD that he was given was almost certainly exactly that. The DICOM media formats and IHE PDI profile are supported by over 100 different vendors. It is widely used and vendor neutral. But they apparently failed to explain this. I can understand the staff not explaining this, but it would have cost nothing to include an explanatory document on the CD itself.

Second, they only included a Windows viewing application. I can understand the need to make a selection. There are over 100 different DICOM viewers available for Windows, Mac OS, Linux, iOS (iPhone/iPad), and Android. It can be too burdensome to provide support for all the possibilities. But why didn't they include a document explaining that there are free, open source, and commercial viewers available for all these systems? I would point Mac OS users to OsiriX as a starting point, and give some Google hints for the others.

Leaving patients with no documentation or hints about where to get a viewer is another administrative mistake. It would cost very little to explain the alternatives in the document describing the CD.

Third, why didn't he get the CD immediately? When I went to the vet I got a DICOM CD with my cat's X-rays immediately. It was just part of the end of visit process. There is no technical reason for a substantial delay or a 9-5 policy. All that's involved is transmitting the images to a system with a CD burner and burning the CDs. This should take tens of minutes at most. It will be less if the network and CD burner are fast. If this were part of the routine process, I would expect the burner to be finished before the patient is ready to leave. The IHE profiles specify the routine process for creating CDs and DVDs. There are more than 10 vendors offering IHE compliant CD creating products, and they all exchange data without problems.

Forcing patients to return later, with some 9-5 limit on services, is an administrative policy decision that I don't understand.

Finally, there is some confusion about whether DICOM requires a PACS system.

Back when imaging was done on film, radiologists, dentists, veterinarians, and other imaging users had filing cabinets with specialized folders and labels to keep track of all the films. The organizing and managing of the film library was a necessary part of daily operations. It could consume a huge floor space and require a large staff.

With the move to digital images, this organizing and managing of images has shifted from film cabinets in huge rooms to image management software and disk drives. It's much smaller and faster than the film libraries, but it remains a necessary part of daily operations.

What DICOM has done is standardize the interface to this image management system. Radiologists call this system "PACS", but other groups like dentists and veterinarians often just call it their image management system. It is normal in a hospital to have a PACS from one vendor, with workstations and modalities from many other vendors. This works because DICOM has standardized the PACS interfaces. The IHE actor called the "Image Manager", which is found in many IHE profiles, is a PACS. The profiles specify both the DICOM interactions and the HL7 interactions for the common hospital activities.

It is possible to use DICOM without having a PACS system. There are small niche uses that do not need to organize and manage their images. These are not common, but they exist and do use DICOM. In most applications you need an image management system. PACS systems are used because the applications need an image management system, not because DICOM requires it. DICOM allows you to choose the image manager vendor independently from the other vendors and still expect everything to fit together and work.

The range of needs for image management is huge. Open source image managers like dcm4che co-exist with very large expensive commercial image managers. The users can decide how much they want to do by themselves and how much they purchase. The systems can be sized appropriately to the volume of imaging that they perform. As a proof of concept, we installed and ran dcm4che on an Android phone. I do not recommend using an Android phone as the PACS for any serious imaging operations, but it worked quite well for a small number of small images. Small research applications do not need to pay the cost of the large systems needed by large hospitals.

Disclosure: I work for a PACS vendor and am involved with dcm4che.

February 02, 2012 in Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)

Actual failure experience (re ATNA-Syslog)

This has to be somewhat vague for trade secret reasons, but during the mid-90's Polaroid gathered extensive data on network problems as part of a total system reliability tracking effort.  The systems were medical printing systems, and data was gathered for video capture units, print servers, and printers.  The networks were hospital networks, all with wired ethernet connections.  There were several hundred devices involved, and measurements were taken at a couple hundred hospitals over an extended period.  Some of the units were stationary, and some of the units were mobile.

The network traffic is noticeably different from ATNA-Syslog, so these results would need some adjustment for that purpose.  The network traffic for printing was DICOM, "lpr", and "ftp" traffic.  The observations were made over a period of several years.  In excess of 10 million transactions were recorded.  Transactions were almost entirely print transactions, with a very few print status queries.  The observations were:

  • Network problems were entirely insignificant.
    • Stationary systems reported no network problems.  TCP/IP handled whatever happened just fine.  (I should note that a down server or printer was not considered a network problem if this was detected before the print data was sent.  The sending systems would queue their prints and wait for the server  or printer to be restored to service. Down servers or printers were considered a different kind of reliability problem.)
    • Mobile systems detected network problems at about one per 250,000 transactions.  These were all the kind of failure that I described earlier.  The application acks did help identify problems.  At a problem rate of 4 ppm we decided it was reasonable to deal with these problems by just re-printing whenever a possible problem was observed.  Four extra prints per million is an insignificant problem and not worth extensive engineering work.

ATNA Syslog is a different network use pattern, and the motivation for directed attacks is different.  Hospital network characteristics may have changed since the 1990's, and this data is not a measurement of cross enterprise data traffic.  This data is relevant for characterizing the internal hospital environment, and can be taken into consideration when doing reliability design, FMEA, and similar analysis for ATNA product design and installation design.

January 03, 2012 in Healthcare, Standards | Permalink | Comments (1) | TrackBack (0)

Should Syslog use application acks?

This started as an ATNA-Syslog question, that I will consider in the context of some DICOM history. 

Should Syslog have an application ack to deal with some known vulnerabilities?  I call them vulnerabilities because they are extremely unlikely except when combined with intentional external actions.  These vulnerabilities become apparent with mobile equipment and with skilled attackers.

DICOM has used application ack since its introduction.  But this was for application reasons, not to deal with vulnerabilities in the underlying TCP and TLS services.

  • In C-MOVE there is a C-MOVE-REQ that says "please accept object X".  The corresponding C-MOVE-RSP says either "yes, got it" or "no, this is why not."  Most often the no is because the responder is out of storage space, but it can also be due to other problems.
  • In C-FIND the pair is more complex.  C-FIND-REQ says "please provide information on objects that match these criteria".  The C-FIND-RSP normally conveys the information requested.  It can also deal with "and there is more to come later" or "this is everything".  The "no, this is why not" deals with invalid requests and other problems.

The use of application ack to deal with network problems as part of failure management was a side effect.  These are just another kind of failure to be reported.  The state machine for the application layer has to deal with network failures as part of completeness, not as a reason for application ack.

DICOM also tolerates a certain level of indeterminacy.  The designers basically said "close enough", much like the designers of CRC and FEC codes.  These cover most of the possible errors, and accept that a few will sneak through and be dealt with elsewhere.

The "elsewhere" in DICOM leads back towards the isues with Syslog and ATNA.  DICOM makes transactions idempotent to the maximum extent possible.

The C-MOVE transaction is idempotent.  There is no end state difference between "C-MOVE object X" once and "C-MOVE object X" one hundred times.  The only difference is the time it takes.  So when in doubt or uncertain, DICOM applications just do it again.  The second time probably reaches a determined state.  Similarly, C-FIND is idempotent.  A look at operational DICOM logs shows well over 99.9% of the transactions performed are idempotent.

There are some necessary exceptions, like "print".  Sending "print" once is different from sending "print" a hundred times.  You get a lot more copies printed.  DICOM tries to take the non-idempotent applications and split them into an idempotent part and non-idempotent part.  DICOM puts as much of the application into the idempotent part as it can.  In the case of "print", everything except the N-ACTION-PRINT is idempotent.  This minimizes the window of vulnerability.  DICOM then took the attitude, "close enough, maybe there is the occasional duplicate print when something goes wrong.  We'll accept that.  It won't happen often enough to be a problem."
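
A generic sketch of that split, with invented function names (this is not the DICOM print service, just the shape of the idea): everything that can be made idempotent is, and the single non-idempotent action is kept as small as possible, so a sender can safely repeat everything else whenever delivery is uncertain.

```python
# Invented names; a sketch of splitting work into an idempotent part and a
# minimal non-idempotent part, in the spirit of the approach described above.

stored = {}   # object store keyed by unique ID

def store_object(object_uid: str, data: bytes) -> None:
    """Idempotent: storing the same object once or a hundred times leaves the
    same end state, so the sender can simply resend when delivery is uncertain."""
    stored[object_uid] = data

def trigger_print(job_uid: str) -> None:
    """The only non-idempotent step, kept as small as possible.  A duplicate
    trigger produces an extra printed copy -- rare enough to accept."""
    print(f"printing job {job_uid} using {len(stored)} stored objects")

# Uncertain delivery?  Repeat the idempotent part freely, then trigger once.
store_object("1.2.3.4", b"pixel data")
store_object("1.2.3.4", b"pixel data")   # retransmission, same end state
trigger_print("job-1")
```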
    
Syslog and ATNA have the issue that:

  • There is no application level ack, and
  • There is a delivery uncertainty in the face of some kinds of errors that occur with mobile devices and skilled attackers.

Can idempotency deal with this?  It can if syslog messages are designed properly.  This would allow a gradual transition to reliability without requiring changes to the underlying syslog protocols.  The application change would be to send extras when uncertain about delivery.  This probably also has a much smaller network impact.  There is extra traffic only in those error situations, not during normal traffic.

This requires messages that are universally unique over all time and sources, and that are idempotent.  The idempotency allows sending duplicates.  Duplicates can be recognized and discarded by recipients.  (This also simplifies some of the multiple database and dispersed database issues for log processing.)

Unique IDs deal with uniqueness.  DICOM uses unique IDs for all sorts of things.  The hardest problem with unique IDs is persuading all the hot shot programmers that they really should read the recommendations on best practices for creating unique IDs. The home grown unique ID algorithms all seem to fall into one or another of the well known traps that result in non-unique ID generation.

If the lead-in to every syslog message body includes that unique ID for that message, you may be done.  The rest is ensuring that the generating application doesn't generate multiple messages for the same event.  That's bad design in any case.  You also want to avoid non-idempotent content.  This should be easy for ATNA and syslog.  The message is describing an event.  It shouldn't be that hard to make these messages idempotent.
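
As a rough sketch of what that could look like (the message layout is my assumption, not the RFC 5424 or ATNA format): put a UUID at the front of each message body and let receivers discard anything they have already seen.

```python
# Sketch of idempotent audit messages: a unique ID in each message body lets
# the sender retransmit freely and lets receivers discard duplicates.
# The layout is illustrative, not the ATNA or RFC 5424 wire format.
import uuid

def make_event_message(event_description: str) -> str:
    # One self-contained message per event, led by a globally unique ID.
    return f"{uuid.uuid4()} {event_description}"

class AuditReceiver:
    def __init__(self) -> None:
        self.seen_ids = set()
        self.events = []

    def receive(self, message: str) -> None:
        msg_id, _, body = message.partition(" ")
        if msg_id in self.seen_ids:
            return                      # duplicate delivery: safe to ignore
        self.seen_ids.add(msg_id)
        self.events.append(body)

# When delivery is uncertain, the sender just sends the same message again.
msg = make_event_message("user alice viewed record P123")
receiver = AuditReceiver()
receiver.receive(msg)
receiver.receive(msg)                   # retransmission; end state unchanged
print(len(receiver.events))             # 1
```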

The one idempotency trap that I've seen with log messages is the use of incremental messages.  These need sequence integrity.  The idempotence of meaning is lost if the messages are processed in the wrong order.  Time tags and other tools can be used to preserve sequence integrity despite repeats and out of order arrival.  But it helps a lot if the syslog messages are designed to be a self-contained complete description of the event of interest.

How well did RFC-3881 and DICOM do when defining audit messages?  So-so.  The event identification mandates identifying a source, date-time, etc.  It does not require that these uniquely identify the message.  The messages are fully self-contained.  What they need is a unique ID.

I also checked the MITRE CEE effort.  They don't mandate idempotent messages either.


So, should I propose a change to add an optional unique ID?  Something to think about.

January 01, 2012 in Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)

Standards and drug costs

In honor of Dr. Meier's recent death, a comment on how standards choices affect drug costs.  Dr. Meier was the major advocate for the use of randomized trials.  Standards can reduce the costs of these trials when used properly.  Some of the results are counterintuitive, but that's common when using statistics.

The major cost of developing a new drug is the Phase III trials.  These involve many patients as trial subjects, plus all the associated costs.  These costs are huge.  The two major approaches used to reduce these costs are:

  1. better early screening to reject candidates.  This reduces the number of Phase III failures.
  2. reduce the cost of individual trials.  Standards can help directly with this.

The primary cost driver for the Phase III trial is the number of subjects needed.  This number results from the need for a randomized trial to overcome the effects of variance.  This variance is composed of:

  1. Subject variance, the inherent variability of the disease and patients.  There is little that can or should be done to reduce this.
  2. Observer variance, the variability of the observation methods, calibration, recording accuracy, etc.  In an ideal world this would be zero, and standards tackle this problem.

The techniques covered by standards include:

  1. recording all the relevant details about observation methods.  DICOM enables capturing the important observational parameters.  It defines the terminology, measurement units, etc.  As observational equipment evolves and is better understood, DICOM extends these definitions.
  2. recording calibration information and making it available for subsequent use.  This was a low hanging fruit for DICOM, requiring only a minor clarification and extension to the patient identification module to incorporate calibration phantom information.  See CP-613 and CP-764 (http://www.dclunie.com/dicom-status/status.html#CorrectionProposalsByNumber) This enables subsequent trial specific calibrations for the patient results.
  3. recording and providing measurement methods.  There is significant variance that results from differences in setup, patient preparation, etc.  This often involves machine specific information for individual model types.  DICOM is working on methods to capture this and communicate it.  See Supplement 121 (http://www.dclunie.com/dicom-status/status.html) The hope is that this will enable all of the sites involved with a particular clinical trial to use the same measurement methods, and reduce variance this way.
  4. definition and distribution of standard codes and terminology.  Clear definitions of measurement meaning reduce variation between observers when reporting.  IHE has contributed a Shared Value Set (SVS) facility so that it is easy to distribute these terms and their definitions to all the staff and other participants.  Other standards efforts like SNOMED and RADLEX try to define universally useful clinical terminologies.

I emphasize variance reduction because that is where there are potentially huge savings.  The number of patients needed in a trial grows with the square of the variability (that is, it is proportional to the variance).  If we can cut variability in half, it would cut the cost of Phase III by as much as 75%.  A more realistic goal is a 10% reduction in variability, which generates roughly a 20% reduction in Phase III costs.  That would be a multi-billion dollar per year savings.  There is much naive discussion about using the network to find more subjects.  This does help, but in almost all cases the real cost driver is the large number of subjects participating in the trial.
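
As a back-of-the-envelope check on those numbers (a standard two-arm sample-size approximation with assumed effect size and power, not figures from any particular trial): the subject count scales with the square of the variability, so halving variability cuts it by about 75%, and a 10% reduction trims it by roughly 19%.

```python
# Standard two-arm sample-size approximation: subjects per arm is proportional
# to sigma^2 (variance), i.e. the square of the outcome's variability.
# The effect size (delta), significance level, and power are assumed values.
from statistics import NormalDist

def n_per_arm(sigma: float, delta: float, alpha: float = 0.05, power: float = 0.8) -> float:
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2

baseline = n_per_arm(sigma=1.0, delta=0.2)
halved = n_per_arm(sigma=0.5, delta=0.2)
ten_percent = n_per_arm(sigma=0.9, delta=0.2)

print(f"halve variability:    {1 - halved / baseline:.0%} fewer subjects")       # ~75%
print(f"10% less variability: {1 - ten_percent / baseline:.0%} fewer subjects")  # ~19%
```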

Now for the statistical subtlety.  One potentially large source of variance is changing methods or terminology in the middle of a study.  So healthcare faces an ethical dilemma.  Changing methods and terminology is needed to make improvements in care, but interferes with ongoing clinical trials.  Making a change should involve the patient and the clinical trials organization as well as the healthcare provider.  The clinical trials organization needs to inform the decision makers about the impact that this change will have.  How much does it affect the trial?  The patient and provider need to assess what the effect of the change will be on the patient's prognosis and goals.

An example of a change is a new patient prep procedure that reduces CT prep time by 5 minutes.  For any patient not involved in a trial involving CT, the answer is obvious.  They should get the change.

But for a patient in a trial using CT data you need to consider whether this change will affect the trial.  If this will increase trial variance by 1%, it may increase the trial costs by 2%, reduce trial quality, or delay trial results.  The patient may well prefer to take the extra 5 minutes rather than interfere with the trial.

To make this work you need:

  • a system in place to keep track of which patients are in trials, and what procedures are affected by these trials.
  • a system to provide trial specific information for everyone conducting those procedures
  • a system of people who are prepared for the operational variety that this creates.  It's like conservation of mass.  The variations have been removed from the clinical trial and put into the day to day operation of the healthcare provider.

The IHE SVS profile helps by enabling the use of date, version, and trial tags to identify value sets that support particular trials.  DICOM Supplement 121 helps by easing distribution and implementation of consistent imaging protocols.  These are a small part of the overall effort, but the bulk of the systems described above are internal to each healthcare provider.

August 16, 2011 in Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)

Internet trust lecture

Summary: A lecture by Miriam Meckel reminded me of the importance of reciprocity in healthcare relationships.

The lecture by Miriam Meckel presented results of a study on building trust on the internet.  They picked ten realistic factors that are part of establishing trust.  Then they examined surveys taken over one year for a variety of B2B and consumer organizations.  These were studied with principal component analysis to see what drives trust the most.

The biggest factor is "reciprocity".  This is the agreement by both sides that their expected actions make sense and are appropriate for the nature of the relationship.  A very simple example is that customers expect to pay for goods delivered.  This reciprocity is a factor independent of contract or other terms.  It applies to later discoveries, undisclosed activity, etc.

Reciprocity was 1/3 of the determination of trust.

Also noted was the penalty for violating reciprocity expectations.  A slimy crook who presents as a crook is not trusted to begin with.  A trusted relationship that then has a reciprocity failure is treated as a major betrayal.  The betrayer becomes much worse than an untrusted crook.  They are an enemy to society.

Three factors are responsible for the next 1/3 of establishing trust:

  • Technical reliability.  This means all aspects of the relationship work smoothly and without problems.  It's much more than just web site stability.
  • Customer control.  The more that the customer determines the relationship and activities, the greater the trust.
  • 3rd party recommendations.

So, with four reasonably well defined areas you get 2/3 of the trust establishment, and reciprocity is the dominating factor. 

This has some relevance to healthcare and its security.  The trust relationship is important to most aspects of healthcare. 

One result is that the risk assessment priorities for security analysis need reconsideration.  It's true that inappropriate disclosure is a risk. I would consider that a technical reliability problem. But, reciprocity, patient control, and 3rd party recommendations are also assets to be protected. 

This also points to a flaw in an argument that I hear often regarding data losses.  The many disclosures due to stolen laptops are discounted because the data is rarely actually disclosed.  In practice the laptop is wiped, because the thief stole it for resale.  Wiped laptops are easier to sell.  That's an argument dealing with the 10% factor of technical reliability failure.  This argument ignores the reciprocity failure, and leaves the vendor open to enemy of society treatment.  That's a big loss of an important asset.

The solution to the reciprocity failure is some mix of

  • have the customer accept that the loss is reasonable.  This is the rather unpopular "there is no privacy any more" argument.
  • make sure the customer knows that you cannot be trusted with their data.  This downgrades you from enemy of society to merely not trustworthy.
  • don't lose data on laptops

We need to add the assets of reciprocity, reliability, customer control, and 3rd party assessment to the risk analysis mix.  It's more than loss of data and data disclosure.

I've seen two other related problems in healthcare.

  1.  "Consent"s, which are important but generally bungled.  Reciprocity does not mean that you told me something would happen.  Reciprocity means that when I later learn about it I agree that it was appropriate.  Consent is only part of reciprocity to the extent that it ensures that the customer understands the other side and knows who not to trust. 
  2. The "patient control" implementations that I've seen have generally asked the patient to do the impossible.  The patient is expected to make an agreement while under extreme stress, inadequately informed, and with no time to get proper advice or more information.  Then the agreement is used to rationalize all kinds of reciprocity failures.  They would do better to deal with the reciprocity failures in most cases, and concentrate patient control on situations where the patient is not under stress, has adequate information, and the time to make an informed decision (including getting 3rd party advice).

July 15, 2011 in Current Affairs, Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)

Ownership and copyright

This weekend's work finds yet another oddball use for "own".  In the world of BPEL and BPMN the "owner" of a task is the person who is currently working on it.  Another way to spread confusion.

John asked about an "XML schema" for privacy.   There are two answers to this.

  1. I think a consent structure similar to the Creative Commons copyright licenses is feasible.
  2. A general XML encoding for privacy, consent, or copyright remains a matter for research and exploration.  The translation of law and decisions into a structured form has been a matter of legal research since the mid 1970's or earlier.  It's still research.  There is also ongoing research into forms of license and contract.  None of these are ready for routine use.

A bit of history

While I think that common consents are feasible, it will take a lot of time.  The history of the work that led up to the Creative Commons licenses shows how much time it can take:

  • 1974 - The CONTU commission recommends that copyright law be extended to software.
  • 1980 - Copyright is extended to include software
  • 1988 - The Emacs General Public License is written.  This is the first effort to formalize a publicly usable license.  It is a reaction to the serious problems resulting from inadequacy of the Gosling license for early emacs work.
  • 1991 - GPLv2 issued.  This became a major influence on copyright and public domain thinking.
  • 2002 - The Creative Commons is established.  The needs for open culture, open publications, etc. are not met by the GPL and similar variants.
  • 2009 - The Creative Commons licenses reach version 3.0 (the current version).

Creative Commons is now to the point where there are about a dozen standard copyright licenses.  For a very large number of people it meets their needs.  You answer a few questions and get a recommended license.  You also get an HTML code snippet that identifies the license, and provides links for further information, attribution, etc.

This has reached the same level of ease of use as the typical publisher's copyright assignment forms, with substantially better commonality.  Every publisher has its own standard assignment forms, with its own terms.  They all have lawyers who customize things to the maximum advantage of the publisher.  Creative Commons took the perspective of the authors and, after a few years of experience with actual author preferences, has a set of common licenses for open culture authors.

A Creative Commons for Consents

There have been occasional discussions of whether privacy rules could be managed like copyright.  Zittrain has written on this, and there was a recent seminar on the topic at the Berkman Center.  These make clear that the larger issue of privacy remains extremely complex.  But in a much narrower domain like patient privacy consents, there is a better chance for success.

I can see a situation where a group defines a set of patient oriented consents.  These consents would differ from much of what I've seen in current work by being patient perspective rather than provider perspective.  Rules like the HIPAA rule are impenetrable to the typical patient.  They deal with the many issues that are visible to the provider, rather than the issues that are visible to the typical patient.

I expect that getting this right will probably take a decade, given that it took two decades for the copyright licenses to evolve a public oriented set.  We can learn much from that effort, but one of the things that you learn is that real experience was crucial to the evolution of these licenses.  Real experience takes time.

June 26, 2011 in Current Affairs, Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)

Discussion when using Robert's Rules

Robert's Rules are rarely invoked directly for standards meetings, but they do apply.  Most people have not studied them and are unwilling to.  One reason is a lack of understanding of how to use the rules without destroying the ability to discuss issues.  (In the hands of the wrong chair and parliamentarian they are very effective at preventing necessary discussion.)  Keith has noticed that they have some real advantages.

How to manage discussion using Robert's Rules.

In order for meetings to be more than a rubber stamp railroading of work prepared by insiders, there needs to be effective presentation and discussion.  This is difficult to combine with decision making.  The process is covered in the complete Robert's Rules, and has been since the initial 1876 edition.  (The latest edition is expensive to buy in computer form.  The first edition is available on Project Gutenberg.  Most of the following has not changed since 1876.)

The decision-making meeting process is full of motions, amendments, etc.  During this period speakers are subject to many restrictions.  In many official meetings a speaker may only speak twice, and each time must be on one and only one topic.

A simple CP

An example of this kind of decision-making is a recent CP in the IHE ITI committee.  There was a proposal to change one paragraph from applying only to "Secure Nodes" to applying to "Secure Nodes and Secure Applications".  This is a proposal that needed no explanation.  There were only three speakers:

  • The motion proposer, who presented the precise wording on a display
  • A supporter who said, "This is an obvious mistake that needs to be fixed."
  • An amender who noticed another location in the paragraph that needed to be fixed.  ITI is not very strict about amendments.  We don't require written submission of amended motions.  Instead the proposed amendment was made to the displayed motion, and an informal agreement reached that it was an appropriate fix.

The resulting amended motion was approved.

A more complex proposal

Some proposals are more complex.  For example, consider the motion made to the DICOM Standards committee to "Approve a workitem to investigate incorporating MINT into DICOM".  Many meeting members started by wondering "what is MINT?".  This can be managed by the "report" process.  This involves a series of steps:

  • The meeting agrees to the reading of the report.  This means leaving the formal session and changing rules.
  • The report is read (which includes discussion also)
  • The report is accepted or rejected
  • The meeting resumes formal session
Reading the report

The report reading rules vary among organizations.  Most common is a formal report reading followed by a question period.  Many organizations require that the report documents be provided to the meeting members in advance. So it's a controlled lecture then Q/A approach.  There are often strict time limits.

Handling the Q/A session is an acquired skill.  The rules are:

Speakers can ask an unlimited number of questions, but

  • Only one question at a time.
  • Only questions about the report are allowed, not opinions, arguments, or conclusions.
  • The chair decides whether a question is duplicate, opinion, argument, etc.

When a report states "we considered alternatives A and B" and the question is asked "what about alternative X?" a skilled report presenter will answer:  "We only considered alternatives A and B".  This closes off all discussion of all other alternatives during the report reading period.  (Persuading everyone that they can no longer bring up any other alternative is a major headache for the chair, especially with certain speakers.  This kind of speaker is sufficiently common that we can all immediately think of several examples.) 

A chair has much worse headaches with unskilled report readers.  I recall one meeting about a civic center where the question "How will this proposal affect parking and traffic?" was answered with "The state thinks a civic center like this is a major priority."  It's a completely unresponsive answer.  With unskilled report readers there needs to be a lot of painful hand holding.  In that particular case the real answer was obtained after several minutes of tooth extraction: "We did not examine parking or traffic implications of building a civic center."  Knowing the answer, you can see why they were reluctant to answer directly.

So what about the report that did not cover alternative X, or consider parking implications?

Accepting or Rejecting the report

At some point, either at a time limit or in the absence of further questions, the chair switches topics to whether the report should be accepted.  The meeting is now back in formal session, but on the subject of whether to accept the report.  So the two-speech limit is back in effect.

Now is the time for proponents of alternative X to speak against the motion.  They should argue that the report is not acceptable without consideration of alternative X.  It's here that emotions will run high.  Very few people realize that their report can be rejected.  In standards work there is far too much willingness to accept incomplete reports.

The most common compromise in formal meetings will be:

  • The report is accepted as a progress report rather than a full report
  • The working group is tasked to continue preparation of the full report.
Meeting resumes after the report is accepted/rejected

Now you're back in formal session, with the two-speech restriction, etc.  But the topic has shifted to the original motion: "Approve MINT workitem".  So the speech limit is reset.  Common outcomes will be:

  • Approve the motion.  (This is even possible when the report is rejected, although not likely)
  • Defeat the motion. (This is more likely when the report is rejected.)
  • Table the motion.  (This is an American usage.  The British rules of order are different than Robert's Rules.  One point of extreme confusion is the definition of "table the motion".  In American rules this means an indefinite postponement of discussion until some later time.  This later time might be a few hours later, or it may be never.  In British usage "tabling the motion" means to make it the current subject for discussion.  That's the exact opposite meaning.)
  • Refer the motion to a sub-committee.  (This also tables the motion for this committee until that sub-committee comes back with a report.)
  • Enter a committee meeting of the whole, quasi committee of the whole, or informal session.  This will eventually end and return to the formal session with the topic still being the original motion: "Approve MINT workitem."  (This is starting to feel like computer programming, isn't it?)


When major discussion is needed

Sometimes major discussion is needed.  This can be accommodated by the committee of the whole, quasi-committee, or informal meeting.  These effectively mean:

  1. A temporary subcommittee has been created, with membership of the entire original committee
  2. The original meeting is postponed and the new committee of the whole meets.
  3. The committee of the whole is only allowed to discuss the proposed motion and prepare a report.  The report may include a proposed amended version, but no decision is allowed on the motion or any amended version.
  4. The committee can manage its time, postpone and resume, etc. as needed.  (The details of rules vary between committee of the whole, quasi-committee, and informal meeting.)
  5. Unless there were special rules put in place when the committee of the whole was created, there is no limit on discussion other than a requirement that all who wish to speak be allowed to speak at least once.
  6. The committee of the whole eventually reaches a conclusion.  At this point a "Motion to rise and report" is made and approved.  Then someone presents the report.  (This is a report like any other, it has a written form, it is presented, there is Q/A, and there is a vote on whether to accept the report.  These tend to be much quicker because everyone was involved in the original work.)
  7. After the report is presented, etc. the original meeting resumes, and formal rules continue as that meeting resumes discussion of the original motion.

Referral to select committees or standing committees

More often, when substantial discussion is needed it is appropriate to refer the matter to a sub-committee.  Discussions by a very large committee are often very time consuming and much less productive than discussion in a subcommittee.  When the whole committee is only a dozen people, there is much less need to refer matters to sub-committee.  The membership of DICOM full committee is limited to full organizations, and still there are over 100 organizational members.  HL7 allows individual memberships so the full HL7 committee is in the thousands. 

Most standards organizations have a collection of standing committees.  The most common result of a proposal like "Approve MINT" is a referral to one of the standing committees to examine some issue and report back.  Select committees are special purpose committees created to deal with that one issue.  The current US term seems to be "tiger team", which sounds more egalitarian and is more ego boosting for members.

Further work

The motion itself may involve referral of work to a committee.  For example, the "Approve MINT ..." was a referral to a standing committee within DICOM.  That standing committee is expected to come back with a report, and probably a proposal to make a change.  The reporting requirements are met by the regular reports from standing committees (called working groups).  The proposed change will go through the DICOM supplement process and emerge as a ballot for the committee to approve or reject.  DICOM differs from Robert's Rules in that it permits supplement ballots to be managed by email independently of active meeting times.

HL7 balloting process differs further from Robert's Rules because it is also used as part of the discussion and consensus building process.  Robert's puts that into the reporting and discussion processes rather than the motion process.  This makes ballot failure and reconciliation a major part of the discussion and consensus process in HL7.

DICOM uses Working Group 6 and the supplement process to manage the discussion and consensus building for major changes to the standard.  This is similar to Robert's reporting process, but with reports split.  The regular reports to the DICOM full committee are summary progress reports.  The reports to Working Group 6 are highly detailed in depth reports (supplements) specifying the exact words to be added or changed, with Working Group 6 sometimes making substantial revisions to the supplements.  DICOM also requires that both the referred working group and working group 6 agree with the final report (supplement).  The final report (supplement) contains the entire recommended change text, is sent to the whole committee for official ballot approval, and generally is accepted with only editorial corrections.  The discussion and consensus building take place prior to the motion and balloting of the change.

May 22, 2011 in Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)

Time to close some doors

In a recent standards tcon I was struck by the statement:

Don't close the door to innovative solutions.

I think it is long past time to start closing that door.  Standards are not research.  The successful standards are created when a group of people realize that:

  • They agree that there is a common problem (aside from minor details)
  • There are multiple well understood solutions to that problem
  • There needs to be agreement on one common solution for progress to be made.

Standards should be created after the innovation has occurred and after there is experience with that innovation.  Experience shows that there is a long period of experimentation and experience gathering between the innovation and the successful standard.  If there is still substantial innovation needed, then standardization is premature.

Fortran

This was a highly successful standard.  There were several key dates:

1958 - The successful introduction of Fortran II into operational use.  It was one of multiple, incompatible but similar languages called Fortran.

1966 - The publication of the Fortran 66 standard, more commonly known as "Fortran IV".

That's eight years between the earliest operational use of one of the versions and the finished standard.

Ethernet

Another successful standard.  Its key dates are:

1968 - Publication of the Alohanet paper, describing the technique and successful experiments.

1973 - Initial operational use of the major contributor to the eventual ethernet standard.

1982 - Completion of the 802.3 standard for 10 Mbit/sec ethernet.

In this case it was nine years from initial operational use, and fourteen years from the initial idea.

TCP/IP

Another successful standard.  Its key dates are:

1973 - Publication of the Catenet paper, describing the internetworking technique and early experiments.

1975 - Initial operational use of the earliest internetworking protocol.

1982 - Publication and transition to TCP/IP, as a finished standard.

This went faster.  It was only nine years from initial idea, and seven years from initial operational use to the point where a standard was ready.

All of these standards continued to evolve and grow.  But the initial innovation was many years before the standardization.  The interim was spent with research, experiments, evaluations, and improvements.  The standards efforts were debates between advocates with experience and experimental results to justify their claims.  The implications of compromises and decisions could be understood.

None of these standards was perfect or complete.  Fortran continued to evolve, Ethernet has grown into a variety of faster versions, and TCP/IP continues to change.  The innovations have been gradual improvements.

April 30, 2011 in Current Affairs, Healthcare, Standards | Permalink | Comments (0) | TrackBack (0)
