What’s in a DITA file?

When you first start tackling a DITA conversion, it’s difficult to get a handle on just what comprises a single file. Is it one topic? Is it multiple topics? How long should a file be? How short can it be? We’ll tackle these questions in this article.

Perfect File

The perfect DITA file is one that contains one topic, where that topic is as long or as short as it needs to be.

The length of a topic depends entirely on the subject matter and issues of usability. Generally, the litmus test is asking yourself “Can a user navigate to this one topic and have all the information they need for it to be usable and stand on its own?” If it’s too short, you’re forcing them to navigate away to find more information. If it’s too long, it becomes too onerous to follow.

Your file should contain just one topic, be it task, concept, or reference, regardless of the length of your topic.

Shortest File

The shortest file allowed in DITA is a topic that contains only a title element and nothing more.

This is absolutely allowed but it is something that you would only do for a specific reason, such as adding another level of headings.
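In XML, a title-only topic might look like this (the topic id and title are hypothetical, and the DOCTYPE declaration is omitted for brevity):

```xml
<!-- A complete DITA file: one topic containing only a title -->
<topic id="advanced_settings">
  <title>Advanced Settings</title>
</topic>
```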

Longest File

The longest file you should have is one that supports your requirements. For most technical publications content, you should only have one topic per file.

Note: An exception to this rule is if you’re authoring training content using the DITA Learning and Training Specialization; there are very good reasons for having many topics in a single file in that case, but those would all be learning and training topics, not the core concept, task, and reference that DITA is built upon. Note that this is true as of DITA 1.2 but may change in the not-so-distant future.

Chunking

DITA architecture allows you to nest many topics into one file. However, doing so introduces major limitations on reuse. If you nest topics into one file, you will be sacrificing the flexibility that DITA introduces. It’s like choosing to hop on one leg instead of running on both.

Breaking each topic out into its own file is what we call “chunking.” One purpose of chunking is to allow the authors to have incredible flexibility when it comes to reusing this content.

Consider this nested task within a task, where two tasks are in the same file.

[Figure: a task nested within another task, both in the same file]

If I want to include the information about studying with a master somewhere else, such as a guide for becoming a senator, I’ll be out of luck because I haven’t separated my two tasks into separate topics—when they’re in the same file, where one goes, the other must also follow.
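Sketched in XML, two tasks in one file (one nested inside the other) might look like this; the task content shown is hypothetical:

```xml
<!-- Both tasks live in one file, so they can only travel together -->
<task id="become_jedi">
  <title>Become a Jedi</title>
  <taskbody>
    <steps>
      <step><cmd>Learn the ways of the Force.</cmd></step>
    </steps>
  </taskbody>
  <!-- The nested task cannot be reused on its own -->
  <task id="study_jedi">
    <title>Study with a master</title>
    <taskbody>
      <steps>
        <step><cmd>Find a master willing to train you.</cmd></step>
      </steps>
    </taskbody>
  </task>
</task>
```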

Once a topic is in its own file, an author can pull that topic into any deliverable that needs it. It’s not uncommon for one file to be part of ten or more deliverables. If you have multiple topics in that one file, then they all must be reused along with the one you want, without even the possibility of changing their order.

There are many other good reasons to chunk. For example, if your content needs to be reorganized, you can quickly drag and drop topics that are each in their own files. All navigation and linking is automatically updated based on your new organization.

How It All Comes Together

Chunking content into individual topics is the first major hurdle that authors face when adopting DITA because it’s so far away from our training and understanding of writing in chapters, books, and documents. It’s not clear how all those tiny little topics come together. And be warned, you will have hundreds of topics that make up just one deliverable.

Enter the DITA map, the great glue that holds it all together.

You can think of a DITA map (which has a .ditamap file extension) as nothing but an organizational mechanism or even as a Table of Contents. A DITA map itself has very little content. It usually contains just a title. What it does have, though, are topicrefs, which are references to all those files you’ve authored.

Here’s a visual representation of a map, where I’ve told it to “pull in” my two topics (with a hierarchy, one nested below the other). When I create my map, I add the topics that are relevant to this deliverable. The map simply references them using a file path with the href attribute.

[Figure: a map with two topicrefs, one nested below the other]

That same map in code view looks like this:

[Figure: the same map in code view]

The only thing that’s actually typed into this map is the content in the title element. The other two objects, the topicrefs, point to the files containing my two tasks using the href value, in this case simply the names of the files: become_jedi.dita and study_jedi.dita.
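Based on that description, the map might look like this in code (the map title here is hypothetical):

```xml
<map>
  <title>Becoming a Jedi</title>
  <!-- Each topicref points to a topic file via its href attribute -->
  <topicref href="become_jedi.dita">
    <!-- Nesting a topicref creates the hierarchy -->
    <topicref href="study_jedi.dita"/>
  </topicref>
</map>
```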

When you publish a DITA map with all your topics referenced, you get a single deliverable with all your content. For PDFs, you’ll have content that looks exactly like your PDFs created from FrameMaker, Word, InDesign or whatever else you have used. For HTML output, each file becomes its own page, with automated navigation to all the other pages. All outputs are entirely customizable.


Summary

Although the length of your topics will vary depending on the subject matter, your files should contain just one topic. Use your DITA map to bring your topics together, giving you the flexibility that DITA promises, including topic-level reuse and the ability to quickly reorganize your content.


DITA and graphics: what you need to know

A DITA project is an ideal time to audit, enhance, and start managing your media assets. Like any other piece of content, your media are a valuable resource that the company can leverage.

Although much of a transition to DITA concentrates on improving the quality of your content, there are also some distinct benefits for your media. By media, I mean:

  • Logos (for branding/marketing)
  • Screenshots
  • Illustrations
  • Diagrams
  • Image maps (a flow graphic that shows how a process or a set of tasks connect with clickable hot spots)
  • Inline graphics for buttons, tips, notes

When you’re moving to DITA, you should be thinking about two things when it comes to media:

  • Minimizing and single sourcing
  • Introducing and maintaining best practices

Graphics you no longer need

Probably the biggest mistake when moving to DITA is to lug your extra, non-essential media around with you, just in case.

Any graphic that is being used solely for the purposes of design can be managed centrally instead of being placed in everyone’s individual folders. These include (but are not limited to):

  • Icons for tips/notes
  • Logos for the title page, header, footer
  • Horizontal rules for separation of content areas

All these graphics get applied on publish so each individual author no longer has to worry about them. Your authors no longer even see these graphics—and also no longer need to manage them. Your publishing expert still needs to manage these graphics efficiently, but at least now there’s only one graphic to manage instead of dozens or hundreds.

For example, when the branding guidelines are updated, the publishing expert simply updates the logo used in the stylesheets–replacing one graphic in one place instead of a graphic in every single title page and footer throughout your library of content.

Prune and archive

The design-related graphics are easy to throw away, but we all have extra graphics lying around. It’s not unusual for a single graphic to have 5 or more other related graphics that are hanging around just in case. For example, you may have files that are older versions, variations, and different size and quality options.

We are always loath to “lose” a graphic, but a DITA migration is the perfect time to archive the older versions and variations. Keep the quality options, though, because they’ll come in handy.

A graphic is only useful if it conveys something that words cannot. If you can explain what the graphic shows, then the graphic is usually redundant and not useful. If you can’t explain it, then the graphic is needed. Prune your content of the graphics that don’t add any value.

Formats

DITA lets you specify multiple types of formats for a single graphic so that you are always publishing the right graphic for the right format. You can easily publish different formats (such as color for ePub and grayscale for print) using DITA attributes.
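One way to set this up is with conditional attributes. In this sketch the file names and otherprops values are hypothetical; at publish time, a ditaval file includes one reference and excludes the other:

```xml
<!-- Same illustration in two formats; conditional processing
     selects the right one for each output -->
<image href="architecture_color.png" otherprops="epub"/>
<image href="architecture_grayscale.png" otherprops="print"/>
```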

All graphics are either vector or raster. The format you use will depend on the type of graphic you need and the outputs you’re publishing to. For more information about raster versus vector, see this article.

Vector graphics (specifically SVG) are usually the right choice for most technical illustrations and diagrams as long as they don’t require complex coloring (like drop shadowing and shading). They are clear, clean graphics that look professional and don’t have that “fuzzy” look on publish.

A huge added bonus is that, because the text in an SVG (usually in the form of callouts or labels) is stored as separate, editable objects, you can export just that text and translate it if you’re providing localized content. This saves having to edit or even re-create a graphic when localizing. Another benefit is that in HTML outputs, SVGs let users zoom without pixelation, as needed.

It’s just icing on the cake that vector graphics are smaller, more compact files with lossless data compression.

SVG is an open standard developed by the W3C, the Web consortium that brought you HTML5. It might not be the only graphic format in the future, but it will definitely be a forerunner. You can create SVG files from most graphics editors, including but not limited to Adobe Illustrator (.ai and .eps files), Microsoft Visio, Inkscape, and Google.

Size

The best size for your graphics depends on the output type. PDFs and HTML have different widths and resolutions. This really does get tricky, but if you’re using the DITA OpenToolkit to publish, it’s possible to set default maximum widths (maintaining the correct aspect ratio) so that you’ll at least never overwhelm your audience with a massive graphic. Use this default maximum in combination with authors using DITA attributes to set preferred width or height (but not both).
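For example, an author might set only a preferred width on the image element (the file name and value here are hypothetical); leaving the height unset lets the output maintain the aspect ratio:

```xml
<!-- Width only; height scales with the aspect ratio on publish -->
<image href="network_overview.svg" width="400px"/>
```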

Interactive graphics

Some illustrations are best done in 3D. The ability to manipulate, rotate, zoom and otherwise play with graphics is not just really cool, it can also let users access the information they need without overwhelming them with 20 different views/zooms of a particular object or object set.

You can also play around with something like Prezi (or better yet, Impress.js), which lets you display and connect information graphically.

Manage graphics

If your authors don’t know that a graphic exists or they can’t find it, then that graphic is a wasted resource. It’s not uncommon for an author to forget or be unable to find the graphics that they themselves created months or years before.

Every time an author re-creates a graphic that they could have re-used, they are wasting on average 5 hours. Assuming your authors are worth approximately $45/hour and that they re-create a graphic they should have re-used about 4 times per year, then that means the company is wasting $900/year/author. If you have 10 authors, that’s $9000/year that could be saved with some simple, basic management of graphics with little or no cost or effort. If you have complex graphics, double or triple that savings.

Just like topic reuse, graphics reuse is a no-brainer source of savings for your company.

The key to graphics management is metadata. File naming, even with strict enforcement, is one of those things that degrades over time. Mistakes creep in. If you’re relying primarily on file naming, then expect to lose or orphan graphics. (An orphaned graphic is one that is not referenced by any topics.)

Instead, use descriptive tags applied to each graphic so authors can search, filter, and find the graphics they’re looking for. Then make sure they search for existing graphics before creating new ones from scratch. This same metadata can also be used to let media be searchable to end users when you publish. If you have videos, you can take this one step further and provide a time-delineated list of subjects covered in the video so users can skip right to the spot they want.

Descriptive tags should also be intelligently managed so you don’t have people using slightly different tags and so that you can modify the tags when it’s needed. These tags are called a taxonomy/classification scheme and can lead to their own chaos if left unmanaged and uncontrolled. Either keep them extremely simple (fewer than 10 tags, no hierarchy), select a CCMS that allows you to manage them, or call in an expert to help you out.

Manage source files

Don’t forget to store and manage your source files for graphics in a similar way to your graphic output files themselves.

A quality CCMS makes it easy to store your source graphics with your output graphics (or vice versa), so you can easily find, for example, both your Visio source file and your eight associated .PNG files.

If your CCMS doesn’t include this functionality (or if you’re using file folders instead), the key is to use metadata that matches how you’re managing your output graphics so that your filters and searches will automatically include the source files for graphics as well.

Summary

Graphics are the least-emphasized aspect of a DITA conversion project, but it’s worth the effort to establish which graphics you need to keep, how to manage them, and how to make them findable for both authors and end users. Your graphics are valuable assets that can and should be leveraged.


I got my XML back. Now what?

If you’re new to DITA conversion projects or you’re planning on converting content soon, then you should give some thought to all the pieces that go into a successful conversion. A well thought out content strategy will help guide you through the process of conversion, but if you haven’t considered all the minutiae, this article will help fill in the blanks.

Review the conversion

Make sure that you take a look at the converted content to check for the following items:

  • You captured all content you wanted, including conditional content, variables, and document details (subtitles, document dates and identifiers, version numbers, etc.).
  • The content was chunked correctly for your needs. Were sections used where you wanted topics or vice versa?
  • Did tables convert correctly? Is all information there, including items like vertical text (as attributes)?
  • Are there any validity problems? Check validity by publishing to XHTML with the DITA Open Toolkit from an XML editor like oXygen, to ensure you can publish without errors.
  • Are the file names the ones you want to use going forward? Is the structure the one you want to keep? If you make changes, make sure you modify all references as well or you’ll have broken paths. If you are using a quality CCMS, consider making these changes after you have uploaded your files (no path changes necessary).
  • Did you leave the chaff behind? If you still have any content that is redundant, useless, or outdated, remove it now. (Tip: Ideally, you should do the bulk of this work before you convert.)

Don’t assume anything. Having eyes on your converted content (all of it) is going to save you endless frustration down the road. Even the best conversion doesn’t always result in exactly what you need or want.

Gather metrics

If you haven’t already, make sure you have “before” metrics using your legacy tools and processes for how long it took to author new content, update content with changes, fix bugs, review content (peer review, tech editor review, SME review, QA review), translate content, and publish content. Remember that your legacy metrics will likely be based on books or chapters and that your new metrics will be based on topics (and maps for publishing). This means you need to have an idea of how many “topics” would have made up a chapter or book in legacy content so that you aren’t stuck comparing apples to oranges. Start keeping metrics on the items below as well to get an idea of how long it takes to implement your re-use strategy, for example.

Identify and fix content that needs work

It is an inevitable certainty that the content you convert will need some work once you get it back. Although you can and should do the bulk of your re-writing before you convert, once you see content in topics, you might realize that you have some key areas to clean up. Concentrate on the big ones, including:

  • Ensure you have the right topic type for the content. If your content is a procedure, then it should be in a task topic. If it is meant for quick look-up (tables, lists, alphabetized or organized content), then it should be a reference topic. If the content explains what something is, how it works, or why the user should care about it, then it’s a concept topic. Those are broad distinctions, but make sure the content fits the topic type.
  • Do your tasks focus on user goals, and is your content minimal? If your tasks cover functionality (using widget A, customizing widget B, etc.), then you likely suffer from badly focussed content. One of the biggest benefits of DITA is topic-based writing and minimalism, both of which enforce writing content that users actually need to get their daily work done. Take the time now to, at a minimum, identify what needs work and, ideally, re-write the best candidates for improvement. It’s important to do this part before you work on your re-use strategy because it will give you all sorts of ideas on how you can re-use content.

Load into a CCMS (optional but usually recommended)

Even with only one writer and no translations, if you have a decent amount of re-use and any workflows to adhere to, a CCMS (Component Content Management System) can pay for itself in under three years. It pays for itself in increased efficiency in a thousand different ways, and the payback is much faster if you have multiple authors, standard amounts of re-use, and translate to one or more languages.

You should load your content into your CCMS before working on it more deeply: once it is in there, it’s easier to find and update content, apply re-use strategies like keys, conditions, and conrefs, and generally work with your content. It also gives you the opportunity to get to know the ins and outs of the CCMS. If you haven’t yet purchased a CCMS but are shopping around for one, you can use your newly converted content as part of a demo or trial; simply ask the vendor to include your content.

Although most CCMSs are intelligent enough for you to point to a ditamap and grab all associated files, there are always some files that are not referenced in your map but need to be uploaded, managed, and versioned, including but not limited to:

  • Source files for graphics (Visio, Photoshop, SnagIt, etc.)
  • Legacy materials in PDF, HTML, etc. if you want to keep copies of them
  • Videos (including source files and supporting files)
  • Graphics that may not be used directly but that you want to keep because they are valuable assets and may be used in the future (not necessarily in the documentation)
  • Engineering documents
  • Strategy documents
  • Anything else that needs to be accessed by multiple authors, versioned, and/or never lost

CCMSs will store these as BLOBs (Binary Large Objects), so make sure you add the appropriate metadata to these files so they are findable and filterable by authors, editors, and managers.

Apply re-use strategy

Your re-use strategy could be something simple like inserting keys and keyrefs to automatically pull in glossary terms or something more complex like using conkeyrefs to pull in conditional elements as needed. At a minimum, you’ll probably want to use conrefs for frequently repeated content, like software/hardware states, warnings or cautions, content that must be standardized across most or all documents, menu options, definitions, and anything that you’d like to update in one place and use wherever needed.
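For instance, a caution written once in a shared file can be pulled in by reference wherever it’s needed (the file path and ids in this sketch are hypothetical):

```xml
<!-- The caution text lives once in shared/warnings.dita; this element
     is replaced by the referenced content at publish time -->
<note type="caution" conref="shared/warnings.dita#warnings/static_caution"/>
```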

Your re-use strategy should be planned out in advance as part of your larger content strategy, but once you receive content back from conversion, you need to actually implement it. The more re-use you can get out of your content, the better it is in the long term: it means huge savings in updating, reviewing, and translating content, not to mention confidence that you will never have contradictory information. Keep in mind, though, that extensive re-use takes time to implement. It’s a cost-saving step that, yes, costs some initial effort, but it will directly improve your ROI by months if not years. So put the effort into applying your re-use strategy before authors start working on content.

Apply metadata strategy

Metadata is like putting strings on your XML elements so you can make them dance. Like your re-use strategy, your metadata strategy is primarily planned out as part of your content strategy. Some metadata will be applied automatically during conversion (like conditional attributes) if it existed in the content submitted for conversion, but you’ll also need to introduce some new metadata. Metadata lets you introduce and manage:

  • Conditional content to provide custom outputs to users on specific platforms, with specific expertise, or with certain combinations of products.
  • Publishing controls, like vertical text for table headings or customizing the content for mobile outputs.
  • SEO keywords to improve findability of topics by search engines.
  • Topic information like author, product, embedded help ID, version, topic status (like draft, in review, final, published, archived), content contributors, etc. If you need to track it, then use metadata.
  • Taxonomy so users can browse by subject.
  • Categories/keywords for media, including graphics and videos (this helps make them findable by authors too).
  • Pretty much anything else you can imagine that is in addition to rather than strictly inside the readable content.
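Much of this metadata lives in a topic’s prolog. A minimal sketch, with hypothetical values:

```xml
<prolog>
  <author>A. Writer</author>
  <metadata>
    <keywords>
      <!-- SEO and findability keywords -->
      <keyword>daily reports</keyword>
    </keywords>
    <!-- Arbitrary name/value pairs, e.g. topic status -->
    <othermeta name="status" content="draft"/>
  </metadata>
</prolog>
```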

Work on publishing

A depressing number of people converting for the first time are dismayed and shocked to find out that the publishing aspect of a DITA project is a completely separate and additional effort over and above the conversion and (usually) the use of a CCMS. Remember that separating content from formatting (as with any conversion to XML) means that you must then put some initial effort into creating the look and feel that you want in each desired output. It’s a big one-time effort (with occasional updates thereafter) that new DITA-ites understandably often overlook.

Whether you publish using the DITA Open Toolkit, a publishing tool/engine like Adobe’s Publishing Server, a third-party website like MindTouch or Fluid Topics, a CCMS add-on, or a homegrown solution, the publishing aspect of a DITA conversion project takes planning, effort, and testing. As with most projects, the more planning and testing you do, the less work it will be. There are many aspects to publishing, but at a high level it can be split into two distinct parts:

  • Converting XML to the output formats you need (HTML, ePub, PDF, etc.)
  • Delivering the content to the users (how will users access and experience the content?)

The second part covers a lot of work, including building a website, designing a search algorithm, designing the interface, introducing ratings and commenting systems, integrating with Support or Knowledge Bases, implementing user access, allowing users to submit their own content, designing support for adaptive content, adding accessibility features, and much more. This is the single most forgotten aspect of the content lifecycle and, arguably, one of the most important ones. Throwing a PDF or tri-pane help at your users no longer meets user demands, so we need to step up and design the entire content experience.

Conclusion

Although conversion feels like the end of a DITA project, there’s a lot of work to be done once you get your XML. Your content strategy will be your guide throughout this process and can help you avoid making costly errors or forgetting to plan for important parts of the project.


Converting to DITA – mastering the task

Adopting DITA means you need to make a switch from document or chapter-based writing to topic-based writing. For writers being exposed to DITA for the first time, this shift in thinking and writing tends to be the hardest part of the transition.

At the core of topic-based writing is the DITA task. Master the task and you start mastering DITA content (or any topic-based content). Concepts and references are important too, but once you have mastered the task, everything else just falls into line.

Your DITA task will be the core of your content. The task topic is your primary way of instructing your users and guiding them through their relationship with the product—from setup to advanced configurations, tasks are going to be the most frequently read topics. If you identify the right tasks to document and document those tasks in a usable way, your documentation will be valuable and usable and your user will be happy with their product. Happy users are always good for business.

If you are working with legacy content, knowing the model and purpose of the DITA task will help you during your conversion. If you have content that doesn’t map one-to-one with the elements of a DITA task, then you’ll know that you have some pre-conversion work to do.

Purpose of a Task

The purpose of a task is to tell a user how to do something. From logging in for the first time to configuring an advanced combination of features, the task walks the user through the steps and provides important contextual information as well.

The task is intended to be streamlined, easy to read, and easy to follow. To get your task down to this minimal, usable core of material, you need to provide just the information that the user needs to complete the task and nothing more.

A well-orchestrated task has the right information in the right location—and nothing extra.

Focus

How long should a task be? It needs to be long enough to stand on its own but short enough that the user won’t give up partway through. Ideally, to be the most useful, a task should be no more than ¾ of a “page” in length (note that page lengths differ—a page is considerably shorter for mobile outputs, for example). However, there are valid use cases for both a one-step task and a 15-step task, so task length really depends on the content.

I think the better question here is not about length but about focus: What should the focus of my task be? If I’m writing about logging in to the system, do I include every way to log in, for every type of user?

The answer is usually no. The more focussed your task is, the more usable it is. Place yourself in the users’ shoes and ask yourself what they need to know. Logging in to the system becomes a whole set of tasks (from which you can re-use steps extensively through the conref mechanism and/or use conditional metadata to make writing and updating faster and easier):

  • Log in for the first time (for admin)
  • Log in for the first time (for user)
  • Log in from a mobile device (if different)
  • Reset your lost password (for admin)
  • Reset your lost password (for user)
  • Log in as a special user
  • Etc.

This type of focus is invaluable to your end users. However, it’s the type of focus that is difficult to correctly identify without doing user testing and getting ongoing user feedback. If your company doesn’t provide an opportunity to get direct feedback from your users, you are relegated to either guessing how users will use the product or writing feature-based content; neither is a recommended way to write.

If you find yourself documenting a task based on a GUI feature or mechanism, then you may be missing the focus that your users need. Make sure you’re identifying and writing for the business goal rather than the product functionality. A correctly focussed task often strings together many pieces of product functionality.

Instead of “Using the print feature”, focus the task(s) on:

  • Exporting to Excel
  • Sending a PDF for review
  • Publishing for mobile devices
  • Printing a hard copy
  • Managing your printer options
  • etc.

Instead of “Configuring the x threshold”, focus the task(s) on:

  • Maximizing efficiency in a large deployment (will include configuring the x threshold as well as other widgets/settings)
  • Maximizing efficiency in a medium deployment

Instead of “Using the MyTube Aggregator feature”, focus the task(s) on:

  • Creating a channel of your favourite videos
  • Growing your reputation/community following

Once your task is properly focussed, the question of length usually answers itself. Always keep in mind that users don’t like to read documentation, so make every task as succinct as you can.

Core Elements of a Task

Use the core elements of a task as your tools for writing a clear, clean, streamlined task that is usable and functional.

Element  Description Example
Title
  • Clearly written description of user goal for the task
Create Daily Reports
Short 
  • Complements the title
  • Used in navigation and search results
  • When combined with the title, helps users decide whether to navigate to a specific topic
  • Uses words that bridges the gap between product terminology and user terminology
Daily reports summarize the system performance in graphical format over the last 24-hours
Pre-requisites
  • What the user must complete or have at hand before starting this task
  • The line sometimes blurs between the first step and a pre-requisite; use common sense
You must have administrative rights
or
You must have configured your server for access through the cloud
Context
  • Explains why the user would perform this task, what their goals are, and places the task in a larger context
Use and customize daily reports to get a snapshot of the system’s health and identify any trending issues or problems before they become critical
Step Command
  • Tells the user what to do for each step succinctly and with no extra words
  • Covers the action they must take and no more
Log in to the command console
Step Info

(Optional)

  • Additional information about the step command that is essential for the user to know about that step, but is nonetheless not part of the action they must take
  • Can often include tips (which should be in a note element inside the info element) or special circumstances that need to be noted
  • Is a troubleshooting element for the user—if they cannot perform this step (e.g. forgot password), give them enough information here so they can move forward
  • With the next two elements, often the content that gets automatically stripped out when publishing for mobile devices
Your password was created as part of the installation process
Step Result (Optional)
  • Tied to particular step, this is the result of the user taking the step
  • Can be omitted when the result is obvious
Your daily metrics display
Step Example (Optional)
  • Tied to a particular step, this is an example of what they see or input
  • Can be omitted when it doesn’t apply
A list of metrics, areas, or a screenshot
Sub steps (Optional)
  • Set of sub-steps that walk the user through the details of a complex step
  • Can often be used when a command becomes too long (you are trying to put too much information into one step)
1.  Restart the agent

a. From Task Manager, locate and select the agent

b. Click Stop

c. Wait 30 seconds

d. Click Start

Step Choices
  • Can be used if the step can be done in different ways for different purposes
If you prefer nightly reports, enter 6:00 a.m.
Task Result (Optional)
  • The result of the user finishing the task correctly
  • Should tie back in with the short description and context
  • Can be omitted if the task result is obvious or doesn’t apply
Customized daily reports that help you identify trends and summarize progress are now available on your Central Admin interface and are available from a drop-down list for all other administrators
Task Example (Optional)
  • An example of what a correctly performed task looks like
  • A way to provide specific details without being specific in the steps
A screenshot or code example
Post-requisites (Optional)
  • What the user must do after they have completed this task
Re-generate all reports to include your new reports in the next bulk export
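Putting these elements together, a minimal task might look like the sketch below. The topic id, title, and content are illustrative only; the element names and their ordering follow the DITA 1.2 task model.

```xml
<task id="view_daily_metrics">
  <title>Viewing your daily metrics</title>
  <taskbody>
    <steps>
      <step>
        <cmd>Log in to the command console.</cmd>
        <!-- Essential supporting information, not part of the action -->
        <info>Your password was created as part of the installation process.</info>
        <!-- Result of taking this particular step -->
        <stepresult>Your daily metrics display.</stepresult>
      </step>
    </steps>
    <!-- What the user must do after completing the task -->
    <postreq>Re-generate all reports to include your new reports in the
    next bulk export.</postreq>
  </taskbody>
</task>
```

Any of the optional elements can simply be omitted when they don't apply.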

Note: There is another task available called a “Machinery Task” that has more elements and more ways to organize those elements. It is appropriate for content that covers assembling and maintaining machinery. Check the DITA 1.2 (or the latest) specification for details.

What is not included in a DITA task is a place to add detailed reference material, rationalization for performing a particular step, or complex or bifurcating task steps that cover multiple scenarios (Linux and Windows, for example).

  • Detailed reference material would be written and chunked separately in its own reference topic and either placed adjacent to this task or linked to via the relationship table in the ditamap.
  • Rationalization for each step (why you perform each step) is not needed. It clutters up the step with information that is not essential. Leave it out or add an overall explanation into a concept topic instead.
  • Bifurcating tasks (usually written in long tables in legacy materials where the user is supposed to skip down to the rows that apply to them) are no longer needed in DITA. Make each scenario its own task or use conditional metadata, and/or define your re-use strategy instead.

Fig: Example of the common elements used in a task (in an XML editor with inline elements showing)

Tasks are really the core of great, usable content. The more focused and streamlined your tasks are, the more valuable your users will find them. It’s important that you use the correct task elements for the correct type of content. Use the elements as your guide to mastering the task. If you’re working on legacy content, then use styles or formats that map to these elements to ensure your conversion to DITA is nice and easy.


DITA conversion and metadata

One of the most overlooked aspects of DITA conversion is including metadata in your conversion project. Metadata is a powerful tool. Please, leverage it! (Go ahead and picture me shouting this from the rooftops.)

Your goal is to capture and transfer metadata that is important to your content and your processes. You want to do this for a few reasons:

  1. So that it’s not forgotten and left behind. “When was this content last updated and who updated it?” You don’t want the answer to be: “Who knows. We converted it last week.”
  2. So you can leverage your XML. Adding metadata to XML is like putting a steering wheel on your car—it gives you all sorts of control over it.
  3. So you don’t have to apply metadata manually after conversion, a painful and time-consuming exercise.

You can also treat your conversion project as an opportunity to introduce new metadata into your content that can really enhance its value. The moment when content is being converted to XML but is not yet loaded into a CMS is the perfect moment for adding metadata.

Part of your overall content strategy should include a section on metadata strategy, where you plan what kinds of information you want to capture (or introduce) and how you will do so.

Metadata explained

Metadata is simply information about information. The date stamp on a file, for example, is metadata about that file. Although we’re used to seeing all sorts of metadata, we rarely use it to our benefit other than by sorting a list of files. Using Windows 7, you could, for example, easily return a list of all graphics that you’ve ever uploaded to your computer that were taken with a specific focal length, no matter where they are stored. You could do the equivalent exercise with your content files (Word documents, FrameMaker files, Excel spreadsheets, etc.) if you took the time to tag them with simple category metadata.

In the context of DITA topics and maps, metadata is information that is not part of the content itself. Metadata is expressed in an element’s attributes and values, in elements in the prolog of a topic, in the topicmeta element in maps, in various other places in maps and bookmaps, and in subject scheme maps.

Metadata in the prolog element

Use metadata for different purposes:

  1. Internal processes. For example, knowing the last date a piece of content was updated can let you know that content has become stale. This sort of metadata can also drive workflows for authoring, reviewing, and translating.
  2. Conditional content. Metadata is what lets you show/hide content that is specific to particular users, specific output types (like mobile), or particular products and helps you maximize your ability to single source and re-use content (thus making your ROI that much more attractive).
  3. Publishing control. Metadata can pass information to your publishing engine to control the look and feel of your content on publish.
  4. Grouping and finding content using a taxonomy or subject scheme. Useful for both authors searching for content and end users searching and browsing for content, this strategy can be a really powerful addition to your content.
  5. Metrics. For example: return a count of topics covering a subject matter, or the number of topics updated in the last x months by author a, b, and c. You can get metrics on any metadata you plan for and implement.

What metadata do you need to capture?

The metadata you need to capture depends on your content strategy. A good method is to start with how you’d like your users (external stakeholders) to experience their content and work backwards from there. For example, if you want localized content to display for users who are from a specific geographic location, then you need to build that in. If you want content to display differently for mobile devices, then you need to build that in.

Don’t forget about your authors when it comes to planning your metadata (it helps to think of them as internal stakeholders). Metadata can introduce some major efficiencies when planning, finding, authoring, and publishing content. A good CMS lets authors browse, search, and filter by subject matter, keyword, component, sub-component, or any other piece of metadata. Sometimes some of the metadata might be applied in the CMS itself rather than in the topics or map, so your metadata plan should include an understanding of what and how you’ll be able to leverage metadata using your CMS of choice.

However, at a minimum, think about including topic-level metadata (traditionally placed in the prolog element) that includes:

  • Author
  • Status of the content (for example, approved)
  • Date content was originally created
  • Date content was last updated
  • Version of product (if applicable)
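A sketch of what that minimum might look like in a topic prolog is shown below. The author name, dates, product name, and status value are invented for illustration; the element names follow DITA 1.2.

```xml
<prolog>
  <author>J. Smith</author>
  <critdates>
    <!-- Date content was originally created / last updated -->
    <created date="2012-03-01"/>
    <revised modified="2013-06-15"/>
  </critdates>
  <metadata>
    <prodinfo>
      <prodname>ExampleProduct</prodname>
      <vrmlist><vrm version="4.2"/></vrmlist>
    </prodinfo>
    <!-- Status of the content, captured as a custom name/value pair -->
    <othermeta name="status" content="approved"/>
  </metadata>
</prolog>
```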

Conditional metadata

Conditional metadata is the most popular use of metadata. The conditional markers on your legacy content should be converted to attributes and their values so you can leverage profiling (publishing for specific users or output types). Not all attributes work as profiling attributes, and not all attributes are available on all elements, so make sure you do your homework when planning your metadata strategy.

Conditional metadata on a step element

The .ditaval file goes hand in hand with conditional metadata. This is a processing file used on publish to show/hide attribute/value pairs.

Ditaval file
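A minimal sketch of the pairing is shown below (the audience value and step text are invented for illustration): a profiling attribute on a step, and the .ditaval rule that filters it out on publish.

```xml
<!-- In the topic: a step profiled for administrators only -->
<step audience="administrator">
  <cmd>Restart the reporting service.</cmd>
</step>

<!-- In the .ditaval file: hide that content for a non-admin build -->
<val>
  <prop att="audience" val="administrator" action="exclude"/>
</val>
```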

Publishing metadata

You can use metadata to control the look and feel of your content. A simple example is for table header columns that should have vertical text rather than horizontal text. A piece of metadata can let the stylesheets identify when to display text with vertical alignment.

Table with metadata that indicates some text should be vertical
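One common way to pass such a hint is the outputclass attribute, which stylesheets can key off of. The attribute value and table content below are invented for illustration:

```xml
<table>
  <tgroup cols="2">
    <thead>
      <row>
        <!-- Stylesheets can render these header cells with vertical text -->
        <entry outputclass="vertical-text">Metric name</entry>
        <entry outputclass="vertical-text">Daily total</entry>
      </row>
    </thead>
    <tbody>
      <row><entry>Logins</entry><entry>42</entry></row>
    </tbody>
  </tgroup>
</table>
```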

Best Practices

I’m the first one to admit that managing your metadata can become a bit of a nightmare.  You need to keep an eye on best practices to make sure what you implement is scalable and manageable.

When you think metadata, think map

There are no two ways about it—trying to manage metadata at the topic level is not always efficient. Instead, think about putting some metadata in maps. This lets you change the metadata of a topic depending on the map it is referenced in, making it more versatile.

However, there are downsides to placing metadata in maps. It means you have to duplicate effort because every time you reference the same topic, you must specify the metadata again in each map, which could lead to inconsistencies. It also means that authors can’t necessarily easily see the metadata that might be important for them to know when using or modifying the topic.

Often, some metadata at the map level lets you leverage your content intelligently while the rest should stay in the topic. Each case is unique and you should define this as part of your content strategy, but some examples are shown below.

Keep in mind that metadata that is assigned in DITA topics can be supplemented or overridden by metadata that is assigned in a DITA map, so you can overlap metadata if needed but the map is (usually) boss. For details, see the DITA specification.

Map metadata using the topicmeta element
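A minimal sketch of map-level metadata (the topic filename and values are invented for illustration) might look like this:

```xml
<map>
  <topicref href="daily_metrics.dita" audience="administrator">
    <topicmeta>
      <author>J. Smith</author>
      <critdates><revised modified="2013-06-15"/></critdates>
    </topicmeta>
  </topicref>
</map>
```

Because the metadata lives on the topicref, the same topic can carry different metadata in a different map.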

Keys and conkeyrefs

Some great alternatives to setting conditional or profiling attributes on elements are to use keys and conkeyrefs. These mechanisms take the control out of the topic and put it in the map or in a central location, where it belongs. When you start controlling your content from your map or from a central location, your content becomes both more versatile and more efficiently updated. For example, a topic could swap out some of its content depending on the map in which it is referenced. This can be useful for anything from a term or variable phrase to a table, graphic, or paragraph.

Use of keyref in a sentence

Defining key in map, where the keyword will replace the keyref in the paragraph above
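A minimal sketch of the keyref mechanism (key name and product name invented for illustration): the topic references a key, and each map defines what that key resolves to.

```xml
<!-- In the topic: a key reference instead of a hard-coded product name -->
<p>Install <keyword keyref="prodname"/> before you begin.</p>

<!-- In one map: the key definition that supplies the replacement text -->
<keydef keys="prodname">
  <topicmeta>
    <keywords><keyword>AcmeReporter Pro</keyword></keywords>
  </topicmeta>
</keydef>
```

A second map can define the same key with different text, so the topic swaps content depending on which map references it.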

Taxonomy/Subject Scheme

Using the subject scheme map, you can take your metadata to a whole new level. The subject scheme map is a way of introducing hierarchy into your classification or subject scheme, and then being able to leverage that hierarchy intelligently on publish. For example, you can create a subject scheme that defines two types of subjects: hardware and software. Each of these categories would be broken out into sub-categories. So hardware might include headsets, screens, and power cords. By connecting this hierarchical categorization to the topics and maps that hold your content, you can manipulate content at the lower level of categorization (for example, exclude all headsets content) or at the higher level (exclude all hardware content). It also lets you change the user experience of content for end users, so they can easily search through or browse these categories. And that’s just the beginning of what you can do with subject scheme maps.
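The hardware/software example above can be sketched as a subject scheme map like the one below (key names are invented for illustration); the enumerationdef binds the hierarchy to a profiling attribute so values can be filtered at either level.

```xml
<subjectScheme>
  <!-- Hierarchy of subjects: excluding "hardware" also excludes its children -->
  <subjectdef keys="hardware">
    <subjectdef keys="headsets"/>
    <subjectdef keys="screens"/>
    <subjectdef keys="power-cords"/>
  </subjectdef>
  <subjectdef keys="software"/>
  <!-- Bind the controlled values to the props attribute -->
  <enumerationdef>
    <attributedef name="props"/>
    <subjectdef keyref="hardware"/>
  </enumerationdef>
</subjectScheme>
```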

For more information on subject scheme maps, see Joe Gelb’s presentation on this subject. Although he distinguishes metadata from taxonomy, this is really an arbitrary distinction. Think of taxonomy as a particular kind of metadata with a specific purpose.

Like any metadata effort, planning your taxonomy and subject scheme is essential. For example, identifying all installation content is probably not going to be useful to end users (who wants to see the installation topics for 40 products?) but grouping content by subcomponent could be essential. The trick is to determine what will be useful.

Summary: A careful, methodical approach to including metadata in your conversion project can help you leverage your XML in a way that can be both internally and externally powerful. Use your conversion project as a way to not only transfer your existing metadata to your XML or CMS, but to also enhance your metadata to ensure you have versatile and findable content.


Migrating to DITA – best practices for authors to consider before converting legacy content

When first making the move to DITA, there are some very important best practices that authors should consider before converting their legacy content.

When doing any sort of conversion from one format to another, the guiding principle is always GIGO (garbage in, garbage out). If your legacy content is not particularly good, then converting it to DITA will only create indifferent content in a so-so DITA markup. The result: failure.

So what authoring best practices should you consider before converting your legacy content?

1.  Topic-based writing

Your content needs to be able to split nicely into topics. The DITA model uses tasks, concepts, and references, but there are variations like super tasks, glossary, and scenarios that could be useful to keep in mind as well. However, if your content is still in chapter-like narrative form, the resulting conversion will be problematic—and your ability to leverage the advantages of DITA (like re-use) will be negatively affected. Worse still, if you have content that contains a mix of concept, task, and/or reference all in the same “chunk”, then it needs to be re-written.

2.  Minimalism

Applying minimalism to content is vital. You could do it post-conversion, of course, but you can save a lot of time by doing it pre-conversion. Minimalism has three important facets to it:

  1. Write goal-based content: Writing goal-based content means more than just planning your content around tasks (although that, too, is important). It means identifying what your users will be doing with the product and writing those tasks. This is a departure from the feature-based content we often see. It’s the difference between writing the task “Using the Fine Tune feature”, which focuses on the product, and the task “Enhancing your Audio”, which focuses on the user. They might cover similar steps, but the focus of the task is on what the user needs to do, not what the product does. When you start identifying really good user goals, you end up writing about many different features, stringing them together so the user gets the information they need to complete that goal. If you have a lot of tasks that have “if-then” types of choices, you are probably mixing different goals into one task. You can often break those out into separate and more meaningful stand-alone tasks. Once you have your goals written, then it’s time to include the conceptual and reference information to support those goals.
  2. Provide only the information users need to perform the task inside the task and write nothing extra: This means writing one way to perform a task, not all ways. It means providing troubleshooting information in line with the steps where it can be useful. It means providing step results when they are informative rather than obvious. It also means not documenting the “cancel” button.
  3. Get rid of the chaff: This includes those rather unnecessary lead-in sentences like “This chapter introduces you to …” or “this table contains information about…”. When your topic and table titles are clear and your content is well written, you can completely remove those extra words (that users, ahem, skip over anyhow).

3.  Navigation

Authors are frequently in love with cross references, especially inline cross references (example: “For more information about x, see link”). Users, however, are not so in love with them. One user recently referred to it as “falling into a spinning circle”. By the time they have followed the link from the original topic (which led them to a web of five more topics), they not only can’t find their way back to the original task, they can’t even remember what the original task was. I call it “the spaghetti mess of links”.

Links between topics should be kept to an absolute minimum and should only be inline in rare circumstances. If you need to link two topics together, the best place for that is at the side or at the bottom of the topic and often DITA will do that automatically for you. What are valid reasons for linking topics?  When the user will never be able to guess that those two topics are related, or that the two topics are not “siblings” to each other in the hierarchy of content, or when you want to introduce a sequence between topics.

There are other valid reasons to include links, but the goal here is to keep linking to a minimum so that users will find the links useful. As a result, they will also pay more attention to links when you provide them. Another goal is to minimize dependencies between topics. You should be putting all links in relationship tables, which are linking mechanisms that live in the DITA maps instead of inside the topics. By removing the links from inside the topics and putting them at a higher level, inside the map, you leave the topics dependency-free to be re-used wherever necessary.
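A relationship table is sketched below (topic filenames invented for illustration). It lives in the map, and each row relates the topics in its cells, which typically render as related links at the bottom or side of each topic on publish:

```xml
<reltable>
  <relheader>
    <relcolspec type="task"/>
    <relcolspec type="reference"/>
  </relheader>
  <relrow>
    <!-- Topics in the same row become related links to each other -->
    <relcell><topicref href="generating_reports.dita"/></relcell>
    <relcell><topicref href="report_field_reference.dita"/></relcell>
  </relrow>
</reltable>
```

The topics themselves contain no links, so they stay dependency-free and re-usable.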

4.  Structural monstrosities

Let’s face it: we sometimes do odd things to our content in order to organize it as best we can. The result is sometimes something too complex to convert well to DITA. One example is a table within a table. Another monstrosity is having procedures in tables for some understandable formatting and layout advantages. Clean up your monstrosities and make them pretty enough to go to the Ball. DITA will give you other ways to organize your content without that sort of complexity.


5.  Structure without differentiating meaning

I have yet to see legacy content that doesn’t include formatting for something like bold, italics, or underline. They are each often applied for very different reasons though. For example, you might italicize a phrase to indicate that it’s a book name, or because it’s a first occurring term, or for emphasis.

In DITA, if you apply the <i> element to all of these different types of content, then you won’t be able to leverage the markup properly. The <cite> element is used to indicate a book or external reference name. A term might be put in the <term> element. Emphasis, on the other hand, is simply not an acceptable reason to change formatting anymore. Once you are using the right elements for the right sort of emphasis, you’ll be able to leverage the formatting for those items separately and do cool things like link your <term> elements to actual glossary descriptions so users have inline rollovers with definitions.

So get familiar with your DITA inline elements like <uicontrol>, <menucascade>, <cite>, <term>, and <ph>, and get them to work for you.
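For example, a sentence that legacy content might render with generic bold and italics can carry real meaning in DITA markup (the menu path, term, and book title here are invented for illustration):

```xml
<p>Choose
  <menucascade>
    <uicontrol>File</uicontrol>
    <uicontrol>Export</uicontrol>
  </menucascade>
  to export your <term>audiogram</term>.
  For background, see <cite>The Audio Processing Handbook</cite>.</p>
```

Each element can now be styled, indexed, or linked independently, which plain italics could never support.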

A good conversion method will be smart enough to make those distinctions for you or help you make them, but that’s a key piece of conversion functionality you should look out for.

6.  Short descriptions

If you haven’t yet encountered the DITA short description, count yourself lucky. It is the one element that legacy content almost never has an equivalent for. DITA best practices say that each topic (every one!) should have a short description. If you’re thinking, “Big deal, I’ll just put my first sentence as the short description everywhere”, then let me stop you right there. A short description is the single hardest piece of content to write in the DITA model.

A good short description is succinct (less than two lines for sure, better to make it one). It accurately describes the topic without repeating the title, and gives users just the right information that, when they see the title+short description combination, allows them to decide whether that topic is worth navigating to or not.

In all online outputs, the title+short description is visible every time your content is in a hierarchy (parent-child). That’s why it looks odd when some topics have short descriptions and others don’t. You should either use them everywhere or use them nowhere.

The short description is a powerful interpretive tool that lets you bridge what is often required technical jargon in the title with the terms a user might be more familiar with. It often leads to the “Oh, that’s what you mean” moment for users. Wield it wisely.
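As a sketch, here is what a title plus short description pair might look like in markup, reusing the hypothetical “Enhancing your Audio” task from earlier (the id and wording are invented for illustration):

```xml
<task id="enhancing_audio">
  <title>Enhancing your audio</title>
  <!-- One line, no title repetition: enough for the user to decide
       whether this topic is worth navigating to -->
  <shortdesc>Use the Fine Tune feature to reduce background noise and
  sharpen voice clarity in your recordings.</shortdesc>
  <taskbody>
    <steps>
      <step><cmd>Open the Fine Tune panel.</cmd></step>
    </steps>
  </taskbody>
</task>
```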

Good Results and Bad Results

The first example is an “as is” conversion.

As is example conversion

This second example is the same content, but with ‘best practices’ applied.

Best practices applied

Conclusion: If you follow these best practices before, during, or after your conversion, your content will become versatile, usable, and streamlined. Excellence in, excellence out is what we should all strive for.


Getting great DITA conversion results

Although conversion is only a small part of your DITA adoption project, it’s the part that causes even smart people to break out into hives. How does one go from a regular, flat document to a series of DITA topics and maps with the correct markup? How do you do that without losing important information? What’s the best way to do it? What tools do you use? How do you start?
Learn the basics

My first recommendation, no matter which conversion method you end up choosing, is to get some basic DITA training first. You should understand what a topic is, the difference between topic types, how a DITA map connects content together, and the basics of attributes.

Even if you’re not doing the conversion yourself, you should have enough knowledge to recognize good results from bad. For example, you should know that if all your topics are the wrong types, the conversion needs to be re-done. One of the warning signs is if you have procedural information in a concept or generic topic with an ordered list (<ol>)—that should be a task topic, which has step and step-related elements you can leverage.
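The difference in markup looks something like this (step text invented for illustration):

```xml
<!-- Warning sign: procedural content converted to a generic ordered list -->
<ol>
  <li>Open the console.</li>
  <li>Click Start.</li>
</ol>

<!-- What it should be: step elements in a task topic -->
<steps>
  <step><cmd>Open the console.</cmd></step>
  <step><cmd>Click <uicontrol>Start</uicontrol>.</cmd></step>
</steps>
```

Only the second version gives you step-level elements (info, stepresult, and so on) to leverage later.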

Visualize the results

There’s no way to get around it: your current content, now likely stored as chapters, books, and documents, will become 100s or 1000s of topics, combined together using maps. The sheer number of objects that result from a conversion project often surprises people.

It’s important to visualize the end results of an entire document set being converted—you will have 1000s or 10,000s of files, plus graphics. However, you also need to visualize what an individual “topic” should be. Is it a chapter? Half a chapter? A few lines?

I tell my clients that it should be a “digestible” amount of content—enough for users to get their business goal completed (learn about X, perform Y, look up information on Z) but not so long that it is too big a bite and gives them heartburn. The length really does depend on the business goal but on average, if you were to print out a topic to PDF, it would be about ¾ of a page long. Of course, there are topics that are going to be shorter and longer but this average at least gives you an idea of what a “topic” might be.

If your conversion is giving you longer and fewer topics, then you have a problem with chunking. You need to either re-write into discrete topics, each with an appropriate title, or re-do your conversion to chunk at the appropriate heading level.

Clean up the content

Converting content that is minimal and matches the DITA architecture already can save you lots of time in post-conversion clean up. Apply minimalism. Remove extra words. Remove sentences that repeat titles or captions. Streamline everything. If you have 40 conditions in your FrameMaker book, it’s time to do a purge.

In general, a clean conversion means that your content maps nicely to the DITA elements without abusing those elements. This means your content already matches the DITA architecture.

This mapping is most evident in tasks, which have a very specific set of elements. For example, if you have a step command followed immediately by a result, it will not convert cleanly.

  1. Log in as an administrator. The administrator interface displays your dashboard, updated every 30 seconds.

If you convert this as is, you will get the markup:

<step><cmd>Log in as an administrator. The administrator interface displays your dashboard, updated every 30 seconds.</cmd></step>

Simply by adding a line break directly following the command, you can get a clean DITA conversion:

  1. Log in as an administrator.
    The administrator interface displays your dashboard, updated every 30 seconds.

<step><cmd>Log in as an administrator.</cmd>
<stepresult>The administrator interface displays your dashboard, updated every 30 seconds.</stepresult></step>

If you think that’s not a big or important change, then think again. Having the correct markup around the appropriate piece of content is what can distinguish DITA adoptions that succeed and those that fail—fail to provide the agility and power that is possible by having content in XML.

Having the result in a <stepresult> tag means that you can programmatically hide all step results for mobile output, for example. Or you can format that non-essential information differently. It also means that you can possibly re-use this step or topic in another place, even if the step result is different in that other context.

The cleaner, more minimal, and more task-based your source content is, the easier the conversion will be. As an added bonus, you will also end up with better quality DITA content.

Develop a Content Strategy

You should develop some sort of content strategy prior to beginning the bulk of your conversion.

Among other things, a content strategy defines the elements and attributes you will use. This helps inform your conversion.

For an example not at all at random, consider the use of the short description. It’s a powerful little element, but it’s also the one piece of content that rarely pre-exists before a move to DITA, and it’s mostly valuable in HTML output. Some companies decide not to use short description elements while others decide to always use them (it’s a bit like a yes or no question). If you don’t have an end-to-end strategy in place, you won’t know whether you should add a short description to every topic as part of your conversion.

I can tell you that adding and putting content in an element in every single topic after conversion is a painfully time-consuming job. It’s much faster to do this work programmatically during conversion, even if you have to go back and fill in some content later.

A content strategy helps you define what you need so you can have the DITA building blocks you need in place as a result of your conversion.

Pick a conversion method

You have some options when it comes down to actually performing a conversion from unstructured (or other-structured) content to XML.

  1. In-house: Usually using FrameMaker conversion tables, this is a way for you to completely control your conversion. However, you won’t benefit from the time-saving scripts or built-in best practices that other options can provide, and errors and omissions can have a serious impact on your project budget and timelines, not to mention quality. The person doing the conversion ought to have a very good working knowledge of DITA architecture, but they can learn FrameMaker conversion tables on the fly.
  2. Consultant: A consultant works with you to identify the strengths and weaknesses of your content first so your conversion is of the highest quality. Consultants will also help you implement your content strategy and apply best practices. You won’t need expert knowledge of DITA but you will still have input and control over the results. Consultants can often meet very tight deadlines, when no one on the team has the time to convert large amounts of content.
  3. Conversion specialist company: These are companies that make a business out of converting content. They can convert custom XML to DITA and convert huge amounts of content in a short time, and have a powerful engine that can be customized to whatever you need. Although budget and pace are usually out of your hands, they get good results even from really difficult conversion projects.
  4. Stilo’s Migrate: In a class on its own, Stilo’s Migrate self-service format lets you leverage the time-saving tools and functionality (like converting images to SVG on the fly) of experts while still having full control over the pace, cost, and details of your conversion. It’s flexible enough to adapt to what you need and powerful enough to make the process fast and reliable. Remember that implementing best practices is still up to you, so the people using Migrate ought to have a very good knowledge of DITA architecture to ensure quality results. Alternatively, you can bounce your Migrate conversion results off a consultant to make sure you’re heading in the right direction.

The method you choose is going to depend on your timelines, budget, in-house expertise, volume of content, and comfort level. You can also mix and match them, getting help with your tough content and doing the easy ones yourself.

Perform a trial

Whichever conversion method you select, make sure you do a trial run with real content. Like taking a car for a test drive before you buy it, a trial run helps you make modifications to your conversion early on, raise content strategy questions you may not have considered, validate the quality of your pre-converted content, and validate metrics, budget, and timelines. If you need to modify your budget or choose another conversion method, this is your big chance.

There are very few drawbacks to performing a trial conversion—it does add some time to your schedule though, so factor that in.

The best type of content for a trial is the content you’re most confident about that also shows some complexity (conditions, text insets, index markers, etc.).

Hint: Stilo offers a free trial conversion that you should really take advantage of.


A Tale of Two Formats: Creating content with Word and DITA XML simultaneously

Presented by Catherine Long & Rich Perry | Varian Medical Systems

You decide to move to publishing with DITA XML. It all appears wonderful. You purchase a CMS, select an editor, create your information model, and share the plan with your authors. This is great! You begin to analyze documents, get a style sheet designed, prepare and schedule author training, but then problems and issues of resistance start adding up. There are so many different document types (60) and so many global authors (100) who are not tech writers. You begin to realize that unless the company can stop needing to provide service and installation for 3 to 6 months, you are not going to be able to flip the switch and move to DITA all at once.

What can you do?
Make the move at your own pace, and publish documents with Word and DITA simultaneously. Doing so provides many benefits, such as bringing the different document types into the project a few at a time and training authors with their own content already in the system.

Catherine Long and Rich Perry show you how this dual publishing helps you move through the process of moving to DITA so that the experience is easier, and perhaps cheaper, for everyone involved.

View recording (registration required)

 

Meet the presenters

Catherine Long
Catherine Long has been with Varian Medical Systems for five years. She was brought in to assist the service documentation department with authoring standards and the publishing process, as well as to lead the move to DITA XML. Her challenge is to design a system architecture and provide training for 100 SMEs who write documents as one part of their busy schedules. Her relaxation is to immerse herself in the worlds of Shakespeare, Wodehouse, Wilde, and others.

Rich Perry
Rich Perry manages a publication team responsible for processing technical servicing content at Varian Medical Systems. Over the course of his career, he has held various technical positions supporting military and medical devices. These experiences as an end-user of servicing procedures lit a fire in him that led to his avocation as a technical trainer, curriculum developer, and product support specialist. Rich’s team is in the process of transitioning from an unstructured Word authoring environment to DITA. While he and his team have not fully implemented DITA, Rich is ready to share his experiences as the struggle continues!


Intelligent content authoring for everyone

Presented by Mike Iantosca | IBM and Patrick Baker | Stilo

The power of intelligent content becomes possible only when that intelligence is intentionally injected, most often in the form of structure and descriptive semantics. That intelligence helps machines do their automated magic. Usually, the needed structure and semantics are added by humans during content authoring. Content management, for both content development and delivery, generally becomes more powerful, flexible, and valuable as structure and semantics grow, but at the price of complexity and required skills for content creators.

Structured content authoring tools have advanced, masking some of that complexity, but even so-called lightweight and tag-less authoring tools leave gaps for diverse content creators and contributors, making intelligent content component strategies for the enterprise an elusive proposition.

Join us for this Stilo DITA Knowledge Series webinar as Mike Iantosca, IBM’s Structured Content Authoring Tools Lead, lays out a multi-tier strategy for addressing the needs of organizations with diverse authoring roles and skills, from professional content creators through casual contributors.

Following Mike’s presentation, Stilo’s Patrick Baker will present the case for a guided-plus-fluid authoring solution for SMEs and casual contributors, and will demonstrate AuthorBridge, a new cloud XML authoring service. AuthorBridge delivers the advantages of free-form authoring (user-friendly, with little or no restriction) while generating rich semantic content under the covers, capturing author intent without any of the complexities of XML.

View recording (registration required)

 

Meet the presenters

Mike Iantosca

A member of the IBM corporate leadership team for Information Development and Digital Services, Mike Iantosca manages an industry-leading portfolio of authoring and content management tools and systems used to create, manage, and publish more than 200 million pages of client-facing content in dozens of national languages for thousands of IBM products. With more than 30 years of structured content authoring and management design and development experience, Mike has the overall responsibility for the planning, design, development, and delivery of solutions that support a wide range of content creators, including thousands of professional product Information Developers, technical editors, Information Architects, globalization professionals, contributors, OEMs and IBM Business Partners.

Mike leads multiple software development and test teams across far-flung geographies that design, develop, test, and support advanced publishing technologies, including structured authoring and content management systems. Mike also provides product and project management leadership across all phases of product development, including concept, development, test, deployment, maintenance, and operations, managing people, processes, requirements, change management, budgets, schedules, suppliers, and contracts.

Patrick Baker

As VP, Development and Professional Services, Patrick Baker heads all new product development at Stilo and is actively engaged in the successful deployment of content processing solutions for publishing clients. Major ongoing product development efforts include OmniMark, the leading high-performance content processing platform, and Migrate, the world’s first cloud XML content conversion service. Patrick has been associated with best practices for complex content processing for over a decade and has successfully delivered custom solutions on behalf of organizations in the automotive, airline, defence, and commercial publishing sectors.

With a B.Sc. degree in Mathematics and a M.Sc. in Computer Science from McGill University, he leads an expert team of highly talented content processing specialists.


Migration to DITA – The Atmel Story

Presented by Morten Haaker, Senior Project Manager | Atmel Corporation

Are you thinking about moving to DITA? In addition to learning DITA, tool and vendor selection, creation of stylesheets, change management and other challenges, you may be faced with having to convert thousands of pages of detailed technical information from an unstructured format to DITA.

Learning from the experience of others can make a big difference to your project’s success, and the Atmel story may help you determine your migration strategy.

Join us for this Stilo DITA Knowledge Series webinar as we hear from Atmel Corporation’s Morten Haaker who will show how Atmel, a global semiconductor company, approached the issue of content conversion as an integral part of their DITA implementation.


Due to an unforeseen technical problem, we were unable to record this webinar.

View webinar slides

 

Meet the presenter

Morten Haaker

Morten is a Senior Project Manager at Atmel Corporation responsible for the technical documentation project. Located in Norway, Morten has been with Atmel for more than 12 years. He holds a Master of Computer Science from the Norwegian University of Science and Technology and heads up the Nordic SDL LiveContent User Group.