DITA and graphics: what you need to know

A DITA project is an ideal time to audit, enhance, and start managing your media assets. Like any other piece of content, your media are a valuable resource that the company can leverage.

Although much of a transition to DITA concentrates on improving the quality of your content, there are also some distinct benefits for your media. By media, I mean:

  • Logos (for branding/marketing)
  • Screenshots
  • Illustrations
  • Diagrams
  • Image maps (a flow graphic that shows how a process or a set of tasks connect with clickable hot spots)
  • Inline graphics for buttons, tips, notes

When you’re moving to DITA, you should be thinking about two things when it comes to media:

  • Minimizing and single sourcing
  • Introducing and maintaining best practices

Graphics you no longer need

Probably the biggest mistake when moving to DITA is to lug your extra, non-essential media around with you, just in case.

Any graphic that is being used solely for the purposes of design can be managed centrally instead of being placed in everyone’s individual folders. These include (but are not limited to):

  • Icons for tips/notes
  • Logos for the title page, header, footer
  • Horizontal rules for separation of content areas

All these graphics get applied on publish so each individual author no longer has to worry about them. Your authors no longer even see these graphics—and also no longer need to manage them. Your publishing expert still needs to manage these graphics efficiently, but at least now there’s only one graphic to manage instead of dozens or hundreds.

For example, when the branding guidelines are updated, the publishing expert simply updates the logo used in the stylesheets–replacing one graphic in one place instead of a graphic in every single title page and footer throughout your library of content.

Prune and archive

The design-related graphics are easy to throw away, but we all have extra graphics lying around. It’s not unusual for a single graphic to have 5 or more other related graphics that are hanging around just in case. For example, you may have files that are older versions, variations, and different size and quality options.

We are always loath to “lose” a graphic, but DITA migration is the perfect time to archive the older versions and variations. Keep the quality options, though, because they’ll come in handy.

A graphic is only useful if it conveys something that words cannot. If you can explain what the graphic shows, then the graphic is usually redundant and not useful. If you can’t explain it, then the graphic is needed. Prune your content of the graphics that don’t add any value.

Formats

DITA lets you specify multiple types of formats for a single graphic so that you are always publishing the right graphic for the right format. You can easily publish different formats (such as color for ePub and grayscale for print) using DITA attributes.
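For example, you might keep two renditions of the same diagram, profile each one with an attribute, and then include or exclude them at publish time with a .ditaval file. A minimal sketch (the attribute values and file names are made up):

    <!-- colour rendition, included when publishing to ePub -->
    <image href="images/pump-diagram-color.svg" otherprops="epub"/>
    <!-- grayscale rendition, included when publishing to print PDF -->
    <image href="images/pump-diagram-gray.png" otherprops="print"/>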

All graphics are either vector or raster. The format you use will depend on the type of graphic you need and the outputs you’re publishing to.

Vector graphics (specifically SVG) are usually the right choice for most technical illustrations and diagrams as long as they don’t require complex coloring (like drop shadowing and shading). They are clear, clean graphics that look professional and don’t have that “fuzzy” look on publish.

A huge added bonus is that, because the text (usually callouts or labels) is stored separately from the artwork, you can export just that text and translate it if you’re providing localized content. This saves having to edit or even re-create a graphic when localizing. Another benefit is that, in HTML outputs, they allow users to zoom without pixelation.

It’s just icing on the cake that vector graphics are smaller, more compact files with lossless data compression.

SVG is an open standard advocated by the W3C, the Web consortium that brought you HTML5. It might not be the only graphic format in the future, but it will definitely be a front-runner. You can create SVG files from most graphics editors, including but not limited to: Adobe Illustrator (.ai and .eps files), Microsoft Visio, Inkscape, and Google.

Size

The best size for your graphics depends on the output type. PDFs and HTML have different widths and resolutions. This really does get tricky, but if you’re using the DITA Open Toolkit to publish, it’s possible to set default maximum widths (maintaining the correct aspect ratio) so that you’ll at least never overwhelm your audience with a massive graphic. Use this default maximum in combination with authors using DITA attributes to set a preferred width or height (but not both).
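For example, an author might set only a preferred width and let the publishing process scale the height to preserve the aspect ratio. A sketch (the file name and value are made up):

    <image href="images/dashboard-overview.png" width="400px"/>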

Interactive graphics

Some illustrations are best done in 3D. The ability to manipulate, rotate, zoom and otherwise play with graphics is not just really cool, it can also let users access the information they need without overwhelming them with 20 different views/zooms of a particular object or object set.

You can also play around with something like Prezi (or better yet, Impress.js), which lets you display and connect information graphically.

Manage graphics

If your authors don’t know that a graphic exists or they can’t find it, then that graphic is a wasted resource. It’s not uncommon for an author to forget or be unable to find the graphics that they themselves created months or years before.

Every time an author re-creates a graphic that they could have re-used, they are wasting on average 5 hours. Assuming your authors are worth approximately $45/hour and that they re-create a graphic they should have re-used about 4 times per year, then that means the company is wasting $900/year/author. If you have 10 authors, that’s $9000/year that could be saved with some simple, basic management of graphics with little or no cost or effort. If you have complex graphics, double or triple that savings.

Just like topic reuse, graphics reuse is a no-brainer source of savings for your company.

The key to graphics management is metadata. File naming, even with strict enforcement, is one of those things that degrades over time. Mistakes creep in. If you’re relying primarily on file naming, then expect to lose or orphan graphics. (An orphaned graphic is one that is not referenced by any topics.)

Instead, use descriptive tags applied to each graphic so authors can search, filter, and find the graphics they’re looking for. Then make sure they search for existing graphics before creating new ones from scratch. This same metadata can also be used to let media be searchable to end users when you publish. If you have videos, you can take this one step further and provide a time-delineated list of subjects covered in the video so users can skip right to the spot they want.
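If you want some of those tags to travel with your DITA content itself (rather than living only in a CCMS), one option is to define each graphic once as a key and hang descriptive keywords off that definition. A minimal sketch, with hypothetical names and tags:

    <keydef keys="fig-pump-exploded" href="images/pump-exploded.svg" format="svg">
      <topicmeta>
        <keywords>
          <keyword>pump</keyword>
          <keyword>exploded view</keyword>
          <keyword>maintenance</keyword>
        </keywords>
      </topicmeta>
    </keydef>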

Descriptive tags should also be intelligently managed so you don’t have people using slightly different tags and so that you can modify the tags when it’s needed. These tags are called a taxonomy/classification scheme and can lead to their own chaos if left unmanaged and uncontrolled. Either keep them extremely simple (fewer than 10 tags, no hierarchy), select a CCMS that allows you to manage them, or call in an expert to help you out.

Manage source files

Don’t forget to store and manage your source files for graphics in a similar way to your graphic output files themselves.

A quality CCMS makes it easy to store your source graphics with your output graphics (or vice versa), so you can easily find, for example, both your Visio source file and your eight associated .PNG files.

If your CCMS doesn’t include this functionality (or if you’re using file folders instead), the key is to use metadata that matches how you’re managing your output graphics so that your filters and searches will automatically include the source files for graphics as well.

Summary

Graphics are the least-emphasized aspect of a DITA conversion project, but it’s worth the effort to establish which graphics you need to keep, how to manage them, and how to make them findable for both authors and end users. Your graphics are valuable assets that can and should be leveraged.


I got my XML back. Now what?

If you’re new to DITA conversion projects or you’re planning on converting content soon, then you should give some thought to all the pieces that go into a successful conversion. A well thought out content strategy will help guide you through the process of conversion, but if you haven’t considered all the minutiae, this article will help fill in the blanks.

Review the conversion

Make sure that you take a look at the converted content to check for the following items:

  • You captured all content you wanted, including conditional content, variables, and document details (subtitles, document dates and identifiers, version numbers, etc.).
  • The content was chunked correctly for your needs. Were sections used where you wanted topics or vice versa?
  • Did tables convert correctly? Is all information there, including items like vertical text (as attributes)?
  • Are there any validity problems? Check validity, then publish to XHTML (for example, from an XML editor like oXygen using the DITA Open Toolkit) to ensure you can publish without errors.
  • Are the file names the ones you want to use going forward? Is the structure the one you want to keep? If you make changes, make sure you modify all references as well or you’ll have broken paths. If you are using a quality CCMS, consider making these changes after you have uploaded your files (no path changes necessary).
  • Did you leave the chaff behind? If you still have any content that is redundant, useless, or outdated, remove it now. (Tip: Ideally, you should do the bulk of this work before you convert.)

Don’t assume anything. Having eyes on your converted content (all of it) is going to save you endless frustration down the road. Even the best conversion doesn’t always result in exactly what you need or want.

Gather metrics

If you haven’t already, make sure you have “before” metrics using your legacy tools and processes for how long it took to author new content, update content with changes, fix bugs, review content (peer review, tech editor review, SME review, QA review), translate content, and publish content. Remember that your legacy metrics will likely be based on books or chapters and that your new metrics will be based on topics (and maps for publishing). This means you need to have an idea of how many “topics” would have made up a chapter or book in legacy content so that you aren’t stuck comparing apples to oranges. Start keeping metrics on the items below as well to get an idea of how long it takes to implement your re-use strategy, for example.

Identify and fix content that needs work

It is inevitable that the content you convert will need some work once you get it back. Although you can and should do the bulk of your re-writing before you convert, once you see content in topics, you might realize that you have some key areas to clean up. Concentrate on the big ones, including:

  • Ensure you have the right topic type for the content. If your content is a procedure, then it should be in a task topic. If it is meant for quick look-up (tables, lists, alphabetized or organized), then it should be a reference topic. If the content explains what something is, how it works, or why the user should care about it, then it’s a concept topic. Those are broad distinctions, but make sure the content fits the topic type.
  • Do your tasks focus on user goals and is your content minimal? If your tasks cover functionality (using widget A, customizing widget B, etc.), then you likely suffer from badly focussed content.  One of the biggest benefits of DITA is topic-based writing and minimalism, which both enforce writing content that users actually need to get their daily work done. Take the time now to at minimum identify what needs work and ideally re-write the best candidates for improvements. It’s important to do this part before you work on your re-use strategy because it will give you all sorts of ideas on how you can re-use content.

Load into a CCMS (optional but usually recommended)

Even with only one writer and no translations, if you have a decent amount of re-use and any workflows to adhere to, the ROI on a CCMS (Component Content Management System) can be less than 3 years. The CCMS pays for itself in increased efficiency in a thousand different ways. The ROI is much shorter if you have multiple authors, standard amounts of re-use, and translate to one or more languages.

You should load your content into your CCMS before working on it more profoundly because once it is in there, it’s easier to find and update content, apply re-use strategies like keys, conditions, and conrefs, and generally work with your content. It also gives you the opportunity to get to know the ins and outs of the CCMS. If you haven’t yet purchased a CCMS but you are shopping around for one, you can use your newly converted content as part of a demo or trial. Simply ask the vendor to include your content.

Although most CCMSs are intelligent enough for you to point to a ditamap and grab all associated files, there are always some files that are not referenced in your map but need to be uploaded, managed, and versioned, including but not limited to:

  • Source files for graphics (Visio, Photoshop, SnagIt, etc.)
  • Legacy materials in PDF, HTML, etc. if you want to keep copies of them
  • Videos (including source files and supporting files)
  • Graphics that may not be used directly but that you want to keep because they are valuable assets and may be used in the future (not necessarily in the documentation)
  • Engineering documents
  • Strategy documents
  • Anything else that needs to be accessed by multiple authors, versioned, and/or never lost

CCMSs will store these as BLOBs (Binary Large Objects), so add the appropriate metadata to these files to make sure they are findable and filterable by authors, editors, and managers.

Apply re-use strategy

Your re-use strategy could be something simple like inserting keys and keyrefs to automatically pull in glossary terms or something more complex like using conkeyrefs to pull in conditional elements as needed. At a minimum, you’ll probably want to use conrefs for frequently repeated content, like software/hardware states, warnings or cautions, content that must be standardized across most or all documents, menu options, definitions, and anything that you’d like to update in one place and use wherever needed.
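For example, a warning that appears in dozens of tasks can live once in a shared topic and be pulled in everywhere else with a conref. A minimal sketch (the file name and IDs are made up):

    <!-- in a shared "warehouse" topic: warnings.dita, topic id="warnings" -->
    <note id="esd-warning" type="caution">Wear an ESD strap before opening the chassis.</note>

    <!-- in any task that needs the warning -->
    <note conref="warnings.dita#warnings/esd-warning"/>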

Your re-use strategy should be planned out in advance as part of your larger content strategy, but once you receive content back from conversion, you need to actually implement it. The more re-use you can get out of your content, the better it is in the long term because it means huge savings in updating content, reviewing content, and translating content, not to mention confidence that you will never have contradictory information. Keep in mind, though, that extensive re-use takes time to implement—it’s a huge cost-saving step that, yes, costs some initial effort but will directly improve your ROI by months if not years. So put the effort into applying your re-use strategy before authors start working on content.

Apply metadata strategy

Metadata is like putting strings on your XML elements so you can make them dance. Like your re-use strategy, your metadata strategy is primarily planned out as part of your content strategy. Some metadata will be applied automatically during conversion (like conditional attributes) if it existed in the content submitted for conversion, but you’ll also need to introduce some new metadata. Metadata lets you introduce and manage:

  • Conditional content to provide custom outputs to users on specific platforms, with specific expertise, or with certain combinations of products.
  • Publishing controls, like vertical text for table headings or customizing the content for mobile outputs.
  • SEO keywords to improve findability of topics by search engines.
  • Topic information like author, product, embedded help ID, version, topic status (like draft, in review, final, published, archived), content contributors, etc. If you need to track it, then use metadata.
  • Taxonomy so users can browse by subject.
  • Categories/keywords for media, including graphics and videos (this helps make them findable by authors too).
  • Pretty much anything else you can imagine that sits alongside, rather than strictly inside, the readable content.

Work on publishing

A depressing number of people converting for the first time are dismayed and shocked to find out that the publishing aspect of a DITA project is a completely separate and additional effort over and above the conversion and (usually) the use of a CCMS. Remember that separating content from formatting (as with any conversion to XML) means that you then must put some initial effort into creating the look and feel that you want in each desired output. It’s a big one-time effort (with occasional updates thereafter) that new DITA-ites understandably often overlook.

Whether you publish using the DITA Open Toolkit, a publishing tool/engine like Adobe’s Publishing Server, a third-party website like MindTouch or Fluid Topics, a CCMS add-on, or a homegrown solution, the publishing aspect of a DITA conversion project takes planning, effort, and testing. As with most projects, the more planning and testing you do, the less work it will be. There are many aspects to publishing, but at a high level, it can be split into two distinct parts:

  • Converting XML to the output formats you need (HTML, ePub, PDF, etc.)
  • Delivering the content to the users (how will users access and experience the content?)

The second bullet could include a lot of work, including building a website, designing a search algorithm, designing the interface, introducing ratings and commenting systems, integrating with Support or Knowledge Bases, implementing user access, allowing users to submit their own content, designing support for adaptive content, adding accessibility features, and much more. This is the single most forgotten aspect of the content lifecycle and arguably one of the most important ones. Throwing a PDF or tri-pane help at your users no longer meets user demands, so we need to step up and design the entire content experience.

Conclusion

Although conversion feels like the end of a DITA project, there’s a lot of work to be done once you get your XML. Your content strategy will be your guide throughout this process and can help you avoid making costly errors or forgetting to plan for important parts of the project.


Converting to DITA – mastering the task

Adopting DITA means you need to make a switch from document or chapter-based writing to topic-based writing. For writers being exposed to DITA for the first time, this shift in thinking and writing tends to be the hardest part of the transition.

At the core of topic-based writing is the DITA task. Master the task and you start mastering DITA content (or any topic-based content). Concepts and references are important too, but once you have mastered the task, everything else just falls into line.

Your DITA task will be the core of your content. The task topic is your primary way of instructing your users and guiding them through their relationship with the product—from setup to advanced configurations, tasks are going to be the most frequently read topics. If you identify the right tasks to document and document those tasks in a usable way, your documentation will be valuable and usable and your user will be happy with their product. Happy users are always good for business.

If you are working with legacy content, knowing the model and purpose of the DITA task will help you during your conversion. If you have content that doesn’t map one-to-one with the elements of a DITA task, then you’ll know that you have some pre-conversion work to do.

Purpose of a Task

The purpose of a task is to tell a user how to do something. From logging in for the first time to configuring an advanced combination of features, the task walks the user through the steps and provides important contextual information as well.

The task is intended to be streamlined, easy to read, and easy to follow. To get your task down to this minimal, usable core of material, you need to provide just the information that the user needs to complete the task and nothing more.

A well-orchestrated task has the right information in the right location—and nothing extra.

Focus

How long should a task be? It needs to be long enough to stand on its own but short enough that the user won’t give up partway through. Ideally, to be the most useful, a task should be no more than ¾ of a “page” in length (note that page lengths differ—a page is considerably shorter for mobile outputs, for example). However, there are valid use cases for both a one-step task and a 15-step task, so task length really depends on the content.

I think the better question here is not about length, but about focus: What should the focus of my task be? If I’m writing about logging in to the system, do I include every way to log in, for every type of user?

The answer is usually no. The more focussed your task is, the more usable it is. Place yourself in the users’ shoes and ask yourself what they need to know. Logging in to the system becomes a whole set of tasks (from which you can re-use steps extensively through the conref mechanism and/or use conditional metadata to make writing and updating faster and easier):

  • Log in for the first time (for admin)
  • Log in for the first time (for user)
  • Log in from a mobile device (if different)
  • Reset your lost password (for admin)
  • Reset your lost password (for user)
  • Log in as a special user
  • Etc.

This type of focus is invaluable to your end users. However, it’s the type of focus that is difficult to correctly identify without doing user testing and getting ongoing user feedback. If your company doesn’t provide an opportunity to get direct feedback from your users, you are relegated to either guessing how users will use the product or writing feature-based content; neither is a recommended way to write.

If you find yourself documenting a task based on a GUI feature or mechanism, then you may be missing the focus that your users need. Make sure you’re identifying and writing for the business goal rather than the product functionality. A correctly focussed task often strings together many pieces of product functionality, as in the examples below.

Instead of “Using the print feature”, focus the task(s) on:

  • Exporting to Excel
  • Sending a PDF for review
  • Publishing for mobile devices
  • Printing a hard copy
  • Managing your printer options
  • etc.

Instead of “Configuring the x threshold”, focus the task(s) on:

  • Maximizing efficiency in a large deployment (will include configuring the x threshold as well as other widgets/settings)
  • Maximizing efficiency in a medium deployment

Instead of “Using the MyTube Aggregator feature”, focus the task(s) on:

  • Creating a channel of your favourite videos
  • Growing your reputation/community following

Once your task is properly focussed, the question of length usually answers itself. Always keep in mind that users don’t like to read documentation, so make every task as succinct as you can.

Core Elements of a Task

Use the core elements of a task as your tools for writing a clear, clean, streamlined task that is usable and functional.

Title
  • Clearly written description of the user goal for the task
  Example: Create Daily Reports

Short Description
  • Complements the title
  • Used in navigation and search results
  • When combined with the title, helps users decide whether to navigate to a specific topic
  • Uses words that bridge the gap between product terminology and user terminology
  Example: Daily reports summarize the system performance in graphical format over the last 24 hours

Pre-requisites
  • What the user must complete or have at hand before starting this task
  • The line sometimes blurs between the first step and a pre-requisite; use common sense
  Example: “You must have administrative rights” or “You must have configured your server for access through the cloud”

Context
  • Explains why the user would perform this task, what their goals are, and places the task in a larger context
  Example: Use and customize daily reports to get a snapshot of the system’s health and identify any trending issues or problems before they become critical

Step Command
  • Tells the user what to do for each step, succinctly and with no extra words
  • Covers the action they must take and no more
  Example: Log in to the command console

Step Info (Optional)
  • Additional information about the step command that is essential for the user to know about that step, but is nonetheless not part of the action they must take
  • Can often include tips (which should be in a note element inside the info element) or special circumstances that need to be noted
  • Is a troubleshooting element for the user—if they cannot perform this step (e.g. forgot password), give them enough information here so they can move forward
  • With the next two elements, often the content that gets automatically stripped out when publishing for mobile devices
  Example: Your password was created as part of the installation process

Step Result (Optional)
  • Tied to a particular step, this is the result of the user taking the step
  • Can be omitted when the result is obvious
  Example: Your daily metrics display

Step Example (Optional)
  • Tied to a particular step, this is an example of what they see or input
  • Can be omitted when it doesn’t apply
  Example: A list of metrics, areas, or a screenshot

Sub steps (Optional)
  • A set of sub-steps that walk the user through the details of a complex step
  • Can often be used when a command becomes too long (you are trying to put too much information into one step)
  Example:
    1. Restart the agent
       a. From Task Manager, locate and select the agent
       b. Click Stop
       c. Wait 30 seconds
       d. Click Start

Step Choices
  • Can be used if the step can be done in different ways for different purposes
  Example: If you prefer nightly reports, enter 6:00 a.m.

Task Result (Optional)
  • The result of the user finishing the task correctly
  • Should tie back in with the short description and context
  • Can be omitted if the task result is obvious or doesn’t apply
  Example: Customized daily reports that help you identify trends and summarize progress are now available on your Central Admin interface and are available from a drop-down list for all other administrators

Task Example (Optional)
  • An example of what a correctly performed task looks like
  • A way to provide specific details without being specific in the steps
  Example: A screenshot or code example

Post-requisites
  • What the user must do after they have completed this task
  Example: Re-generate all reports to include your new reports in the next bulk export

Note: There is another task available called a “Machinery Task” that has more elements and more ways to organize those elements. It is appropriate for content that covers assembling and maintaining machinery. Check the DITA 1.2 (or the latest) specification for details.

What is not included in a DITA task is a place to add detailed reference material, rationalization for performing a particular step, or complex or bifurcating task steps that cover multiple scenarios (Linux and Windows, for example).

  • Detailed reference material would be written and chunked separately in its own reference topic and either placed adjacent to this task or linked to via the relationship table in the ditamap.
  • Rationalization for each step (why you perform each step) is not needed. It clutters up the step with information that is not essential. Leave it out or add an overall explanation into a concept topic instead.
  • Bifurcating tasks (usually written in long tables in legacy materials where the user is supposed to skip down to the rows that apply to them) are no longer needed in DITA. Make each scenario its own task or use conditional metadata, and/or define your re-use strategy instead.

Fig: Example of the common elements used in a task (in an XML editor with inline elements showing)
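In plain XML (outside an editor), a task built from these elements might look something like the minimal sketch below. The content is illustrative only, based on the daily-reports example from the table:

    <task id="create-daily-reports">
      <title>Create Daily Reports</title>
      <shortdesc>Daily reports summarize the system performance in graphical
        format over the last 24 hours.</shortdesc>
      <taskbody>
        <prereq>You must have administrative rights.</prereq>
        <context>Use and customize daily reports to get a snapshot of the
          system's health and identify trending issues before they become
          critical.</context>
        <steps>
          <step>
            <cmd>Log in to the command console.</cmd>
            <info>Your password was created as part of the installation
              process.</info>
          </step>
          <step>
            <cmd>Select <menucascade><uicontrol>Reports</uicontrol>
              <uicontrol>Daily</uicontrol></menucascade>.</cmd>
            <stepresult>Your daily metrics display.</stepresult>
          </step>
        </steps>
        <result>Customized daily reports are now available on the Central
          Admin interface.</result>
        <postreq>Re-generate all reports to include your new reports in the
          next bulk export.</postreq>
      </taskbody>
    </task>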

Tasks are really the core of great, usable content. The more focussed and streamlined your tasks are, the more valuable your users will find them. It’s important that you use the correct task elements for the correct type of content. Use the elements as your guide to mastering the task. If you’re working on legacy content, then use styles or formats that map to these elements to ensure your conversion to DITA is nice and easy.


LavaCon Portland 2017 | November 5-8, Portland, OR

Stilo is pleased to be one of the Silver Sponsors at this year’s LavaCon USA conference being held in Portland, Oregon from November 5-8. Why not come and visit us at booth 42 and learn more about Migrate and AuthorBridge?

LavaCon is a gathering place for content strategists, content engineers, documentation managers, and other content professionals. The 2017 content strategy conference focuses on how to build bridges: bridges between content silos, technology silos, even people silos.

Stop by our booth in the expo hall and ask us for a demo of Migrate, our cloud XML conversion service which enables technical authoring teams to convert content from source formats including FrameMaker and Word to XML DITA, or of AuthorBridge, our web-based XML editor which provides subject matter experts with a Guided & Fluid authoring experience without requiring any knowledge of XML or DITA.

We’re pleased to have been selected to make the following presentations at LavaCon Portland – be sure to add them to your schedule!

Tuesday, November 7, 2017 | 1:30 – 2:30pm
Collaborative Authoring for Technical Authors and SMEs
Patrick Baker, VP Development & Professional Services

Abstract | When technical authors are working in structured authoring formats such as DITA, collecting and reviewing contributions from SME team members can quickly become a difficult problem to manage. SMEs typically want to continue authoring in Word, and understandably don’t want to learn about XML. So is it just a matter of providing an intuitive, Word-like authoring environment for SMEs, or is there more to it than that?

Wednesday, November 8, 2017 | 9:00 – 10:00am
Reusing Your Reuse: How to Keep the Reuse You Have When You Move to DITA
Helen St. Denis, Conversion Services Manager

Abstract | You are moving to DITA, but you’ve already got reuse happening in your legacy format. Reuse mechanisms don’t usually match DITA’s. How can you keep the added value of content reuse when you move the content to the new format?

Find out more and register

SSP Annual Meeting 2017 | May 31-June 2, Boston, MA

Join us at the Society for Scholarly Publishing (SSP) 39th Annual Meeting at The Westin Boston Waterfront, Massachusetts from May 31 to June 2, 2017.

You will find us at booth #407B.

The theme for this year’s meeting is Striking a Balance: Embracing Change While Preserving Tradition in Scholarly Communications. The meeting will look at the ways in which we as publishers, librarians, vendors and academics manage to explore and develop new technologies, business models, and partnerships while also remaining focused on our mission to publish and distribute quality scholarly content to researchers and students.

Find out more and register

LavaCon Europe 2017 | May 22-24, Dublin, Ireland

Stilo is pleased to be one of the Silver Sponsors at this year’s LavaCon Europe conference being held at the Croke Park Conference Center in Dublin, Ireland from May 22-24.

LavaCon is a gathering place for content strategists, content engineers, documentation managers, and other content professionals.

Stop by our booth in the expo hall and ask us for a demo of Migrate, our cloud XML conversion service which enables technical authoring teams to convert content from source formats including FrameMaker and Word to XML DITA and AuthorBridge, our web-based XML editor which provides subject matter experts with a Guided & Fluid authoring experience without requiring any knowledge of XML or DITA.

Find out more and register

Posting of Annual Report, Notice of AGM and Proxy Form

25 April 2017

Stilo International plc (the “Company”), the AIM quoted software and cloud services company, announces that its 2017 Annual General Meeting will be held at the offices of RSM UK Audit LLP, 25 Farringdon Street, London EC4A 4AB at 11.30 a.m. on Thursday 18 May 2017 (“AGM”).

In connection with this meeting, the Annual Report of the Company for the year ended 31 December 2016 (“Annual Report”) has been posted to shareholders who have elected to receive hard copies of the Annual Report. The Notice of the AGM, and its associated Proxy Form, are both included within the Annual Report.

The Annual Report of the Company (including the Notice of AGM, and its associated Proxy Form) are also available to view and download from the Company’s website at www.stilo.com

ENQUIRIES

Stilo International plc
Les Burnham, Chief Executive
T +44 1793 441 444

SPARK Advisory Partners Limited (Nominated Adviser)
Neil Baldwin T +44 203 368 3554
Mark Brady  T +44 203 368 3551

SI Capital (Broker)
Andy Thacker
Nick Emerson
T +44 1483 413500


DITA conversion and metadata

One of the most overlooked aspects of DITA conversion is including metadata in your conversion project. Metadata is a powerful tool. Please, leverage it! (Go ahead and picture me shouting this from the rooftops.)

Your goal is to capture and transfer metadata that is important to your content and your processes. You want to do this for a few reasons:

  1. So that it’s not forgotten and left behind. “When was this content last updated and who updated it?” You don’t want the answer to be: “Who knows. We converted it last week.”
  2. So you can leverage your XML. Adding metadata to XML is like putting a steering wheel on your car—it gives you all sorts of control over it.
  3. So you don’t have to apply metadata manually after conversion, a painful and time-consuming exercise.

You can also treat your conversion project as an opportunity to introduce new metadata into your content that can really enhance its value. The moment when content is being converted to XML but is not yet loaded into a CMS is the perfect moment for adding metadata.

Part of your overall content strategy should include a section on metadata strategy, where you plan what kinds of information you want to capture (or introduce) and how you will do so.

Metadata explained

Metadata is simply information about information. The date stamp on a file, for example, is metadata about that file. Although we’re used to seeing all sorts of metadata, we rarely use it to our benefit other than by sorting a list of files. Using Windows 7, you could, for example, easily return a list of all the photos on your computer that were taken at a specific focal length, no matter where they are stored. You could do the equivalent exercise with your content files (Word documents, FrameMaker files, Excel spreadsheets, etc.) if you took the time to tag them with simple category metadata.

In the context of DITA topics and maps, metadata is information that is not part of the content itself. Metadata is expressed in an element’s attributes and values, in elements in the prolog of a topic, in the topicmeta element in maps, in various other places in maps and bookmaps, and in subject scheme maps.

Metadata in the prolog element

Use metadata for different purposes:

  1. Internal processes. For example, knowing the last date a piece of content was updated can let you know that content has become stale. This sort of metadata can also drive workflows for authoring, reviewing, and translating.
  2. Conditional content. Metadata is what lets you show/hide content that is specific to particular users, specific output types (like mobile), or particular products and helps you maximize your ability to single source and re-use content (thus making your ROI that much more attractive).
  3. To control the look and feel of your content on publish. Metadata allows information to pass to your publishing engine.
  4. Grouping and finding content using a taxonomy or subject scheme. Useful for both authors searching for content and end users searching and browsing for content, this strategy can be a really powerful addition to your content.
  5. To run metrics against. Example: Return a count of topics covering a subject matter, or the number of topics updated in the last x months by author a, b, and c. You can get metrics on any metadata you plan for and implement.

What metadata do you need to capture?

The metadata you need to capture depends on your content strategy. A good method is to start with how you’d like your users (external stakeholders) to experience their content and work backwards from there. For example, if you want localized content to display for users who are from a specific geographic location, then you need to build that in. If you want content to display differently for mobile devices, then you need to build that in.

Don’t forget about your authors when it comes to planning your metadata (it helps to think of them as internal stakeholders). Metadata can introduce some major efficiencies when planning, finding, authoring, and publishing content. A good CMS lets authors browse, search, and filter by subject matter, keyword, component, sub-component, or any other piece of metadata. Sometimes some of the metadata might be applied in the CMS itself rather than in the topics or map, so your metadata plan should include an understanding of what and how you’ll be able to leverage metadata using your CMS of choice.

However, at a minimum, think about including topic-level metadata (traditionally placed in the prolog element) that includes:

  • Author
  • Status of the content (for example, approved)
  • Date content was originally created
  • Date content was last updated
  • Version of product (if applicable)
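
In a topic, that kind of metadata might look something like the following minimal sketch (the names and values are hypothetical):

    <prolog>
      <author>jsmith</author>
      <critdates>
        <created date="2016-03-01"/>
        <revised modified="2017-04-10"/>
      </critdates>
      <metadata>
        <prodinfo>
          <prodname>WidgetServer</prodname>
          <vrmlist>
            <vrm version="4.2"/>
          </vrmlist>
        </prodinfo>
        <!-- status tracked as a simple name/value pair -->
        <othermeta name="status" content="approved"/>
      </metadata>
    </prolog>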

Conditional metadata

Conditional metadata is the most popular use of metadata. The conditional markers on your legacy content should be converted to attributes and their values so you can leverage profiling (publishing for specific users or output types). Not all attributes can work as profiling attributes, so make sure you do your homework when planning your metadata strategy. Also, not all attributes are available on all elements.

Conditional metadata on a step element
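The markup might look something like this minimal sketch (the attribute values are made up):

    <step audience="administrator" product="widgetPro">
      <cmd>Restart the reporting service.</cmd>
    </step>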

The .ditaval file goes hand in hand with conditional metadata. This is a processing file used on publish to include or exclude content based on attribute/value pairs.

Ditaval file
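A .ditaval file for the step example above might look something like this (again, the values are hypothetical):

    <val>
      <!-- include content flagged for administrators -->
      <prop att="audience" val="administrator" action="include"/>
      <!-- exclude anything flagged for the Lite edition -->
      <prop att="product" val="widgetLite" action="exclude"/>
    </val>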

Publishing metadata

You can use metadata to control the look and feel of your content. A simple example is for table header columns that should have vertical text rather than horizontal text. A piece of metadata can let the stylesheets identify when to display text with vertical alignment.

Table with metadata that indicates some text should be vertical
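One common way to do this (a sketch, not the only approach) is to use a universal attribute such as outputclass that your stylesheets watch for; the class name here is made up:

    <thead>
      <row>
        <!-- the stylesheet rotates any header entry carrying this outputclass -->
        <entry outputclass="vertical-text">Throughput (requests/sec)</entry>
        <entry outputclass="vertical-text">Latency (ms)</entry>
      </row>
    </thead>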

Best Practices

I’m the first one to admit that managing your metadata can become a bit of a nightmare.  You need to keep an eye on best practices to make sure what you implement is scalable and manageable.

When you think metadata, think map

There are no two ways about it—trying to manage metadata at the topic level is not always efficient. Instead, think about putting some metadata in maps instead.  This lets you change the metadata of a topic depending on the map it is referenced in, making it more versatile.

However, there are downsides to placing metadata in maps. It means you have to duplicate effort because every time you reference the same topic, you must specify the metadata again in each map, which could lead to inconsistencies. It also means that authors can’t necessarily easily see the metadata that might be important for them to know when using or modifying the topic.

Often, some metadata at the map level lets you leverage your content intelligently while the rest should stay in the topic. Each case is unique and you should define this as part of your content strategy, but some examples are shown below.

Keep in mind that metadata that is assigned in DITA topics can be supplemented or overridden by metadata that is assigned in a DITA map, so you can overlap metadata if needed but the map is (usually) boss. For details, see the DITA specification.

Map metadata using the topicmeta element
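For example, a map might assign audience and product metadata to a topic reference like this (a minimal sketch with hypothetical values):

    <map>
      <topicref href="create-daily-reports.dita">
        <topicmeta>
          <audience type="administrator"/>
          <prodinfo>
            <prodname>WidgetServer</prodname>
            <vrmlist>
              <vrm version="4.2"/>
            </vrmlist>
          </prodinfo>
        </topicmeta>
      </topicref>
    </map>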

Keys and conkeyrefs

Some great alternatives to setting conditional or profiling attributes on elements are to use keys and conkeyrefs. These mechanisms take the control out of the topic and put it in the map or in a central location, where it belongs. When you start controlling your content from your map or from a central location, your content becomes both more versatile and more efficiently updated. For example, a topic could swap out some of its content depending on the map in which it is referenced. This can be useful for anything from a term or variable phrase to a table, graphic, or paragraph.

Use of keyref in a sentence

Defining key in map, where the keyword will replace the keyref in the paragraph above
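Together, the two pieces might look something like this sketch (the key name and product name are made up):

    <!-- in the topic: the phrase is resolved from the key definition -->
    <p>Install <ph keyref="product-name"/> on every node before you begin.</p>

    <!-- in the map: the key definition supplies the text -->
    <keydef keys="product-name">
      <topicmeta>
        <keywords>
          <keyword>WidgetServer Enterprise</keyword>
        </keywords>
      </topicmeta>
    </keydef>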

Taxonomy/Subject Scheme

Using the subject scheme map, you can take your metadata to a whole new level. The subject scheme map is a way of introducing hierarchy into your classification or subject scheme, and then being able to leverage that hierarchy intelligently on publish. For example, you can create a subject scheme that defines two types of subjects: hardware and software. Each of these categories would be broken out into sub-categories. So hardware might include headsets, screens, and power cords. By connecting this hierarchical categorization to the topics and maps that hold your content, you can manipulate content at the lower level of categorization (for example, exclude all headsets content) or at the higher level (exclude all hardware content). It also lets you change the user experience of content for end users, so they can easily search through or browse these categories. And that’s just the beginning of what you can do with subject scheme maps.
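Using the hardware/software example above, a subject scheme map might look something like this minimal sketch. It binds the hierarchy to the props attribute, though you could bind it to another profiling attribute instead:

    <subjectScheme>
      <!-- the subject hierarchy -->
      <subjectdef keys="equipment">
        <subjectdef keys="hardware">
          <subjectdef keys="headsets"/>
          <subjectdef keys="screens"/>
          <subjectdef keys="power-cords"/>
        </subjectdef>
        <subjectdef keys="software"/>
      </subjectdef>
      <!-- bind the hierarchy to a profiling attribute -->
      <enumerationdef>
        <attributedef name="props"/>
        <subjectdef keyref="equipment"/>
      </enumerationdef>
    </subjectScheme>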

For more information on subject scheme maps, see Joe Gelb’s presentation on this subject. Although he distinguishes metadata from taxonomy, this is really an arbitrary distinction. Think of taxonomy as a particular kind of metadata with a specific purpose.

Like any metadata effort, planning your taxonomy and subject scheme is essential. For example, identifying all installation content is probably not going to be useful to end users (who wants to see the installation topics for 40 products?) but grouping content by subcomponent could be essential. The trick is to determine what will be useful.

Summary: A careful, methodical approach to including metadata in your conversion project can help you leverage your XML in a way that can be both internally and externally powerful. Use your conversion project as a way to not only transfer your existing metadata to your XML or CMS, but to also enhance your metadata to ensure you have versatile and findable content.


JATS-Con 2017 | April 25-26, Bethesda, MD

Stilo will be attending JATS-Con 2017, April 25 & 26, in the Lister Hill Auditorium at the U.S. National Library of Medicine on the NIH Campus in Bethesda, Maryland.

JATS-Con is a conference for anyone interested in learning more about the Journal Article Tag Suite (JATS), an XML format for marking up and exchanging journal content. Conference presentations are peer-reviewed and result in a final paper that is archived.

The conference is free to attend, although pre-registration is required.

Check out the program and register

Why not meet with one of our technical team at JATS-Con 2017 and learn more about Migrate, our cloud service that enables Journal Publishers to automate the conversion of MS Word documents to JATS, or AuthorBridge, our web-based JATS editor that enables journal publishers and their authors to easily create structured content with no knowledge of XML.

Schedule a meeting

Migrating to DITA – best practices for authors to consider before converting legacy content

When first making the move to DITA, there are some very important best practices that authors should consider before converting their legacy content.

When doing any sort of conversion from one format to another, the guiding principle is always GIGO (garbage in, garbage out). If your legacy content is not particularly good, then converting it to DITA will only create indifferent content in a so-so DITA markup. The result: failure.

So what authoring best practices should you consider before converting your legacy content?

1.  Topic-based writing

Your content needs to be able to split nicely into topics. The DITA model uses tasks, concepts, and references, but there are variations like super tasks, glossary, and scenarios that could be useful to keep in mind as well. However, if your content is still in chapter-like narrative form, the resulting conversion will be problematic—and your ability to leverage the advantages of DITA (like re-use) will be negatively affected. Worse still, if you have content that contains a mix of concept, task, and/or reference all in the same “chunk”, then it needs to be re-written.

2.  Minimalism

Applying minimalism to content is vital. You could do it post-conversion, of course, but you can save a lot of time by doing it pre-conversion. Minimalism has three important facets to it:

  1. Write goal-based content: Writing goal-based content means more than just planning your content around tasks (although that, too, is important). It means identifying what your users will be doing with the product and writing those tasks. This is a departure from the feature-based content we often see. It’s the difference between writing the task “Using the Fine Tune feature”, which focuses on the product, and the task “Enhancing your Audio”, which focuses on the user. They might cover similar steps, but the focus of the task is on what the user needs to do, not what the product does. When you start identifying really good user goals, you end up writing about many different features, stringing them together so the user gets the information they need to complete that goal. If you have a lot of tasks that have “if-then” types of choices, you are probably mixing different goals into one task. You can often break those out into separate and more meaningful stand-alone tasks. Once you have your goals written, then it’s time to include the conceptual and reference information to support those goals.
  2. Provide only the information users need to perform the task inside the task and write nothing extra: This means writing one way to perform a task, not all ways. It means providing troubleshooting information in line with the steps where it can be useful. It means providing step results when they are informative rather than obvious. It also means not documenting the “cancel” button.
  3. Get rid of the chaff: This includes those rather unnecessary lead-in sentences like “This chapter introduces you to …” or “this table contains information about…”. When your topic and table titles are clear and your content is well written, you can completely remove those extra words (that users, ahem, skip over anyhow).

3.  Navigation

Authors are frequently in love with cross references, especially inline cross references (example: “For more information about x, see link”). Users, however, are not so in love with them. One user recently referred to it as “falling into a spinning circle”. By the time they have followed the link from the original topic (which led them to a web of five more topics), they not only can’t find their way back to the original task, they can’t even remember what the original task was. I call it “the spaghetti mess of links”.

Links between topics should be kept to an absolute minimum and should only be inline in rare circumstances. If you need to link two topics together, the best place for that is at the side or at the bottom of the topic, and often DITA will do that automatically for you. What are valid reasons for linking topics? When the user would never guess that the two topics are related, when the two topics are not “siblings” in the hierarchy of content, or when you want to introduce a sequence between topics.

There are other valid reasons to include links, but the goal here is to keep linking to a minimum so that users will find the links useful. As a result, they will also pay more attention to links when you provide them. Another goal is to minimize dependencies between topics. You should be putting all links in relationship tables, which are linking mechanisms that live in the DITA maps instead of inside the topics. By removing the links from inside the topics and putting them at a higher level, inside the map, you leave the topics dependency-free to be re-used wherever necessary.
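A relationship table in a map might look something like this sketch (the file names are hypothetical):

    <reltable>
      <relheader>
        <relcolspec type="task"/>
        <relcolspec type="reference"/>
      </relheader>
      <relrow>
        <!-- the task and its supporting reference topic link to each other on publish -->
        <relcell><topicref href="create-daily-reports.dita"/></relcell>
        <relcell><topicref href="report-field-reference.dita"/></relcell>
      </relrow>
    </reltable>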

4.  Structural monstrosities

Let’s face it: we sometimes do odd things to our content in order to organize it as best we can. The result is sometimes something too complex to convert well to DITA. One example is a table within a table. Another monstrosity is having procedures in tables for some understandable formatting and layout advantages. Clean up your monstrosities and make them pretty enough to go to the Ball. DITA will give you other ways to organize your content without that sort of complexity.

5.  Structure without differentiating meaning

I have yet to see legacy content that doesn’t include formatting for something like bold, italics, or underline. They are each often applied for very different reasons though. For example, you might italicize a phrase to indicate that it’s a book name, or because it’s a first occurring term, or for emphasis.

In DITA, if you apply the <i> element to all of these different types of content, then you won’t be able to leverage the markup properly. The <cite> element is used to indicate a book or external reference name. A term might be put in the <term> element. Emphasis, on the other hand, is simply not an acceptable reason to change font weight anymore. Once you are using the right elements for the right sort of emphasis, you’ll be able to leverage the formatting for those items separately and do cool things like link your <term> elements to actual glossary descriptions so users have inline rollovers with definitions.
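For example, a quick before/after sketch (the book title and term are made up):

    <!-- legacy-style markup: one element for two different meanings -->
    <p>See the <i>Installation Guide</i> for the <i>fetch interval</i> setting.</p>

    <!-- semantic markup: each meaning gets its own element -->
    <p>See the <cite>Installation Guide</cite> for the <term>fetch interval</term> setting.</p>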

So get familiar with your DITA inline elements like <uicontrol>, <menucascade>, <cite>, <term>, <ph>, <dl>, and get them to work for you.

A good conversion method will be smart enough to make those distinctions for you or help you make them, but that’s a key piece of conversion functionality you should look out for.

6.  Short descriptions

If you haven’t yet encountered the DITA short description, count yourself lucky. It is the only element that legacy content often doesn’t have. DITA best practices say that each topic (every one!) should have a short description. If you’re thinking, “Big deal, I’ll just put my first sentence as the short description everywhere”, then let me stop you right there. A short description is the single hardest piece of content to write in the DITA model.

A good short description is succinct (less than two lines for sure, better to make it one). It accurately describes the topic without repeating the title, and gives users just the right information that, when they see the title+short description combination, allows them to decide whether that topic is worth navigating to or not.

In all online outputs, the title+short description is visible every time your content is in a hierarchy (parent-child). That’s why it looks odd when some topics have short descriptions and others don’t. You should either use them everywhere or use them nowhere.

The short description is a powerful interpretive tool that lets you bridge what is often required technical jargon in the title with the terms a user might be more familiar with. It often leads to the “Oh, that’s what you mean” moment for users. Wield it wisely.

Good Results and Bad Results

The first example is an “as is” conversion.

As is example conversion

This second example is the same content, but with ‘best practices’ applied.

Best practices applied

Conclusion: If you follow these best practices before, during, or after your conversion, your content will become versatile, usable, and streamlined. Excellence in, excellence out is what we should all strive for.