Tuesday, November 30, 2010

Offshoring work for medical diagnosis

A Wall Street Journal blog headline states that India is benign for radiologists. It turns out that:

... reading such images relies heavily on what the two economists call “tacit knowledge.” Pattern-recognition software, which could make the work routine, doesn’t work very well in identifying malignancies and other problems,

The paper the blog refers to was, of course, talking about clinical diagnosis, but one step removed from this is medical image analysis. Studies usually require databases (some very large) of subjects/cohorts who fit a common description. We might be quite far from fully automatic diagnosis, but tasks can be broken down and piecemeal solutions can certainly be outsourced. (I've been tempted more than once to go this route with the data I use, or would use if only I could process them all.)

Suppose we want to build a database of twin brains or multiple sclerosis brains or whatever. Some of the tasks that are routine and could be outsourced are: registering the brains to a common template, segmenting specific sections of anatomy, and representing the data in a certain way (in a representation space). Now, with close supervision, someone with basic training could easily do this. So the problem might be that there are not enough people who can supervise such work, and the people who can have not thought of setting up shop in India. Big pharma is interested in such studies, so it could be a lucrative outsourcing venture if someone could put it all together.

Thursday, November 25, 2010

Tract-based white matter fiber analysis: what to study

The difficult problem in white matter fiber analysis (obtained from DTI or HARDI sources) is to efficiently handle the large volume of fibers. This may be achieved by clustering the fibers (possibly with approximations such as the Nyström method), by sampling from the data in a fiber bundle, or by computing representative means, as suggested here.
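As a rough illustration of the clustering route, here is a minimal Python/NumPy sketch of Nyström-approximated spectral clustering of a fiber set. It assumes every fiber has already been resampled to the same number of points, uses a deliberately simple fiber distance, and omits the normalization and orthogonalization refinements of Fowlkes et al.; treat it as a sketch, not a faithful reimplementation.

```python
# Sketch: Nystrom-approximated spectral clustering of fibers.
# Assumes each fiber is an (n_points, 3) array, all with the same n_points.
import numpy as np
from scipy.cluster.vq import kmeans2

def fiber_distance(f1, f2):
    """Mean pointwise distance between two fibers, taking the better
    of the two point orderings (fibers have no preferred direction)."""
    d_fwd = np.mean(np.linalg.norm(f1 - f2, axis=1))
    d_rev = np.mean(np.linalg.norm(f1 - f2[::-1], axis=1))
    return min(d_fwd, d_rev)

def nystrom_spectral_cluster(fibers, n_clusters=5, n_landmarks=100, sigma=10.0, seed=0):
    """Cluster a large fiber set using only the affinity columns of a small
    set of landmark fibers (Nystrom extension), then k-means in the
    approximate spectral embedding."""
    rng = np.random.default_rng(seed)
    n = len(fibers)
    idx = rng.choice(n, size=min(n_landmarks, n), replace=False)
    rest = np.setdiff1d(np.arange(n), idx)

    # Affinities among landmarks (A) and landmarks vs. the remaining fibers (B).
    A = np.array([[np.exp(-fiber_distance(fibers[i], fibers[j])**2 / sigma**2)
                   for j in idx] for i in idx])
    B = np.array([[np.exp(-fiber_distance(fibers[i], fibers[j])**2 / sigma**2)
                   for j in rest] for i in idx])

    # Nystrom extension of the landmark eigenvectors to all fibers.
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1][:n_clusters]
    vals, vecs = vals[order], vecs[:, order]
    U = np.zeros((n, n_clusters))
    U[idx] = vecs
    U[rest] = (B.T @ vecs) / vals            # approximate eigenvectors for the rest

    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12   # row-normalize
    _, labels = kmeans2(U, n_clusters, minit='++')
    return labels
```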

Volume-based methods side-step this issue by treating a white matter structure as a single anatomical unit. With tract-based analysis, however, processing individual fibers is usually a requirement. Tractography and clustering algorithms have made this possible. Improvements are needed but it is my view that making the pre-processing pipeline more efficient is an engineering effort.

While tedious processing is a disadvantage, tract-based methods offer the potential to study local parameters along a tract. This is important in the study of white matter disease and, at this juncture, it is here that the maximum contributions to the medical community can be made.
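To make "local parameters along a tract" concrete, here is a minimal sketch that samples an FA profile at evenly spaced arc-length positions along a single fiber. The names (`fa_volume` as a 3D scalar array, a fiber given as points in voxel coordinates) are assumptions for the illustration, and a nearest-neighbour lookup is used to keep it short.

```python
# Sketch: FA profile along one fiber, sampled at evenly spaced arc lengths.
import numpy as np

def resample_by_arclength(fiber, n_samples=50):
    """Resample an (n_points, 3) polyline at n_samples evenly spaced
    arc-length positions using linear interpolation."""
    seg = np.linalg.norm(np.diff(fiber, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    s_new = np.linspace(0.0, s[-1], n_samples)
    return np.column_stack([np.interp(s_new, s, fiber[:, d]) for d in range(3)])

def fa_profile(fiber, fa_volume, n_samples=50):
    """FA value at each arc-length sample along the fiber (nearest voxel)."""
    pts = resample_by_arclength(fiber, n_samples)
    ijk = np.clip(np.round(pts).astype(int), 0, np.array(fa_volume.shape) - 1)
    return fa_volume[ijk[:, 0], ijk[:, 1], ijk[:, 2]]
```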

References
C. Fowlkes, S. Belongie, F. Chung, and J. Malik, "Spectral Grouping Using the Nyström Method," IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2), 2004, pp. 214-225.

Monday, November 15, 2010

White matter fiber analysis: why we need statistical summaries (means)

A key concept in medical image analysis is the idea of a mean template, i.e., a statistical average around which deviations can be assessed. In the context of white matter fiber analysis, we seek to represent an anatomically defined fiber bundle with a mean and variance that describe its essential characteristics. This mean is of interest for practical reasons that go beyond atlas construction. From my hard-won experience working with DTI fibers, these are three reasons we need to compute means:

1. There are a large number of fibers involved in white matter fiber analysis. The corpus callosum alone has over 300 million fibers, and the whole brain some 100 billion (source: Mori's atlas). The tractography output, which is some fraction of this, can still be several thousand fibers. Due to this large volume, a practical first step in any study, and one that I advocate, is to compute a representative mean of a fiber bundle.

2. The tractography output is subject to error. Noise, imperfections in the image and the presence of regions of low anisotropy due to fiber crossings all contribute to this. In order to make the streamline output more robust, we can average over the fiber bundle. This strategy is also useful when a representative bundle is sought and there are discontinuities and other fiber damage due to disease.

3. To facilitate statistical analysis for population studies where the underlying problem is one of assigning membership to a group.

Means may be computed for the following situations:

i) for a group of fibers within a fiber bundle (a minimal sketch of this case follows the list)
ii) for an intra- or inter-subject collection of fibers from many bundles
iii) for an intra- or inter-subject collection of means of fiber bundles
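For case (i), here is a minimal sketch, assuming all fibers in the bundle have been resampled to the same number of points with roughly corresponding endpoints. This is a plain Euclidean pointwise average, not one of the Riemannian means discussed elsewhere on this blog.

```python
# Sketch: pointwise Euclidean mean (and spread) of a fiber bundle.
import numpy as np

def bundle_mean(fibers):
    """fibers: list of (n_points, 3) arrays, all with the same n_points.
    Returns the (n_points, 3) mean fiber and the pointwise standard deviation."""
    ref = fibers[0]
    aligned = []
    for f in fibers:
        # Flip a fiber if its reversed ordering matches the reference better.
        d_fwd = np.sum(np.linalg.norm(f - ref, axis=1))
        d_rev = np.sum(np.linalg.norm(f[::-1] - ref, axis=1))
        if d_rev < d_fwd:
            f = f[::-1]
        aligned.append(f)
    stack = np.stack(aligned)                # (n_fibers, n_points, 3)
    return stack.mean(axis=0), stack.std(axis=0)
```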

The computation of statistical summaries is usually part of any framework for white matter fiber analysis. Some of these frameworks are listed in my previous post on Mathematical frameworks.

Saturday, November 6, 2010

Tract-based quantitative white matter fiber analysis: Mathematical frameworks

This is a standalone post. It is also part of the Quantitative white matter fiber analysis series posted here.



There are three steps in tract-based data analysis: pre-processing, modeling and statistical analysis. The pre-processing may involve registration, tractography, segmentation or clustering, and the information contained in these results is often incorporated into the data analysis model. Fractional anisotropy (FA) is one such measure, and the analysis of FA profiles along tracts is included in most studies. The quantitative frameworks use both simple ideas, such as the alignment of fibers for statistical analysis, and more advanced mathematical concepts, such as currents or techniques from Riemannian geometry.
These methods for tract-based analysis are listed below:

1) Fiber-tract oriented statistics
This model by Corouge et al. [1] is a prototype for computing statistics along a fiber bundle. The core idea was to give a compact description of the geometry and diffusion properties of a fiber bundle along its length profile. Corresponding points for a set of fibers are aligned, and averages of DTI indices along the cross-section are computed. This can be done in linear spaces or in nonlinear Riemannian spaces.
This basic idea has also been extended to a population atlas, where information contained in tractography results [2] or registration results [3] is evaluated along a tract-based coordinate system.

2) Statistical modeling and EM clustering of white matter fiber bundles
Individual fibers are parameterized and aligned with the goal of performing tract-oriented analysis over a population. A statistical model of pre-segmented fiber bundles is then calculated which serves as a prior for mixture model clustering. The Expectation-Maximization algorithm is used to infer membership probabilities and cluster parameters. Atlas guided clustering gives anatomically meaningful bundles. This is work by Maddah et al. [4].

3) Statistical model of white matter fiber bundles based on currents
This is a flexible framework where fiber bundles (collections of curves) are modeled as currents. The space of currents is a vector space equipped with an inner product and a norm, which defines the distance between two bundles. This is a global distance that does not use point-wise correspondences [5]. Durrleman et al. give a statistical model for a fiber bundle atlas and its variability in a population (a toy numerical sketch of the currents distance follows this list of frameworks).

4) Comprehensive Riemannian Framework
In this framework, Mani et al. [6] use various combinations of shape, scale, orientation and position, the physical features associated with white matter fibers, to define Riemannian feature spaces. This is useful since applications have different objectives and often need different sets of tools and metrics for optimal results. For each joint manifold, a (geodesic) distance metric that quantifies differences between fibers, as well as tools for computing statistical summaries of samples, are defined. Correspondences between fibers are implicitly established during these pairwise comparisons. The framework may be extended to accommodate features such as scalar diffusion indices.

5) Statistical mapping of medial models of white matter tracts
Yushkevich et al. [7] use a different geometric model, giving major fiber tracts a medial representation. While this is not strictly a tract-based model, it is included here because local statistics of tensor-based features can be computed.
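To give a feel for the currents idea in framework 3, here is a toy Python sketch of the kernel inner product between two bundles and the resulting distance. It follows the general construction in [5] with a Gaussian kernel, but it is not code from any of the cited papers, and the kernel width `sigma` is an arbitrary choice.

```python
# Toy sketch: currents distance between two fiber bundles (Gaussian kernel).
# Each bundle is a list of (n_points, 3) polylines.
import numpy as np

def _segments(bundle):
    """Segment midpoints and tangent (difference) vectors of every fiber."""
    centers, tangents = [], []
    for f in bundle:
        centers.append(0.5 * (f[1:] + f[:-1]))
        tangents.append(f[1:] - f[:-1])
    return np.vstack(centers), np.vstack(tangents)

def currents_inner(bundle1, bundle2, sigma=5.0):
    """Kernel inner product of the two currents: sum over all segment pairs of
    k(c_i, c_j) * (tau_i . tau_j) with a Gaussian kernel k."""
    c1, t1 = _segments(bundle1)
    c2, t2 = _segments(bundle2)
    sqdist = np.sum((c1[:, None, :] - c2[None, :, :]) ** 2, axis=-1)
    return np.sum(np.exp(-sqdist / sigma**2) * (t1 @ t2.T))

def currents_distance(bundle1, bundle2, sigma=5.0):
    """Norm of the difference of the two currents; no point correspondences needed."""
    return np.sqrt(max(currents_inner(bundle1, bundle1, sigma)
                       + currents_inner(bundle2, bundle2, sigma)
                       - 2.0 * currents_inner(bundle1, bundle2, sigma), 0.0))
```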

Bibliography:

1) Isabelle Corouge, P. Thomas Fletcher, Sarang Joshi, Sylvain Gouttard, Guido Gerig, "Fiber Tract-Oriented Statistics for Quantitative Diffusion Tensor MRI Analysis," Medical Image Analysis 10 (2006), pp. 786-798.

2) Lauren J. O'Donnell, Carl-Fredrik Westin and Alexandra J. Golby. "Tract-Based Morphometry for White Matter Group Analysis," NeuroImage. Volume 45, Issue 3, (2009), pp. 832-844.

3) Casey B. Goodlett, P. Thomas Fletcher, John H. Gilmore, and Guido Gerig. "Group Analysis of DTI Fiber Tract Statistics with Application to Neurodevelopment," NeuroImage. (2009), 45(1 Suppl): S133–S142.

4) Maddah M., Grimson W.E.L., Warfield S.K. "Statistical Modeling and EM Clustering of White Matter Fiber Tracts", Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging. (2006);1, pp. 53-56.

5) M. Vaillant and J. Glaunès, "Surface matching via Currents", Proceedings of Information Processing in Medical Imaging, Lecture Notes in Computer Science vol. 3565, Springer (2005), pp. 381–392.

6) Meena Mani, Sebastian Kurtek, Christian Barillot, Anuj Srivastava. "A Comprehensive Riemannian Framework for the Analysis of White Matter Fiber Tracts," In ISBI'2010, pp. 1101-1104.

7) Yushkevich P.A., Zhang H., Simon T.J., Gee J.C., "Structure-specific statistical mapping of white matter tracts," (2008) NeuroImage, 41 (2), pp. 448-461.

Saturday, October 16, 2010

Quantitative white matter fiber analysis: a short history (Part III)

Part III: Quantitative tract-based analysis

This is the third post in this three-part series. Parts I and II are here and here.



It is useful to classify the white matter data analysis of the three decades that followed the introduction of the first MRI scans as either volume-based or tract-based. The pre-processing workflows and the possibilities for data interpretation differ in these two approaches.

Volume-based: Anatomical structures or regions of interest are treated as volumes and quantitative information is smoothed or averaged in such a way that the local variation in individual tracts is not preserved. The white matter structures are usually segmented by thresholding FA maps though fiber tracts have also been used. Group-wise registration for population studies may be done at the voxel level or across individual volumes. A voxel-based coordinate system is used in the first case and a structure-based coordinate system in the second.

Tract-based: The emphasis is on fiber tracts and parameters that vary along a fiber or anatomically defined bundle. Diffusion indices such as FA and physical descriptors such as shape are typically studied to assess fiber integrity or changes due to disease.



Tract-based image analysis was made possible only after the first tractography algorithms were introduced. In 1999, Mori et al. ushered in the field by reconstructing fiber pathways in a rat brain. Improvements to the basic tractography algorithm and work in clustering paved the way for data analysis. These three stages of what is still a developing field are summarized below.

I) Tractography
Tractography, or fiber tracking, as the name suggests, is a way to follow the direction of the local white matter diffusion from voxel to voxel. For DTI, the simplest algorithms deterministically follow the direction of the principal eigenvector of the diffusion tensor. The reconstruction process, which includes curvature thresholds and other termination criteria, generates a tract or streamline. More sophisticated approaches include interpolations for smoother pathways, the use of anatomical and topological constraints to guide the tracking, and ways to deal with the uncertainty at each voxel due to noise and registration errors.
Streamline tractography is also used in conjunction with high angular resolution (HARDI) methods.
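A minimal sketch of the deterministic DTI streamline idea (Euler integration of the principal-eigenvector field) is given below. The `(X, Y, Z, 3, 3)` tensor array, nearest-voxel lookup and single tracking direction per seed are simplifying assumptions; real implementations add interpolation, FA-based termination, and bidirectional tracking from each seed.

```python
# Sketch: deterministic streamline tracking on a DTI tensor volume.
import numpy as np

def principal_direction(tensors, ijk):
    """Eigenvector of the largest eigenvalue of the tensor at voxel ijk."""
    vals, vecs = np.linalg.eigh(tensors[tuple(ijk)])
    return vecs[:, np.argmax(vals)]

def track(tensors, seed, step=0.5, max_steps=2000, angle_thresh_deg=45.0):
    """Follow the principal eigenvector from a seed point (voxel coordinates),
    stopping at the volume boundary or at sharp turns."""
    pos = np.asarray(seed, dtype=float)
    prev_dir = principal_direction(tensors, np.round(pos).astype(int))
    streamline = [pos.copy()]
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    for _ in range(max_steps):
        ijk = np.round(pos).astype(int)
        if np.any(ijk < 0) or np.any(ijk >= np.array(tensors.shape[:3])):
            break                              # left the volume
        d = principal_direction(tensors, ijk)
        if np.dot(d, prev_dir) < 0:
            d = -d                             # eigenvectors have no intrinsic sign
        if np.dot(d, prev_dir) < cos_thresh:
            break                              # curvature / termination criterion
        pos = pos + step * d
        streamline.append(pos.copy())
        prev_dir = d
    return np.array(streamline)
```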

II) Clustering
The mass of rendered DTI fibers was not immediately amenable to analysis. Clustering was used to organize and pare the fibers down into meaningful fiber tracts, and this in turn led to a study of clustering methods themselves.

III) Mathematical Frameworks

(Due to the length of this post, I will cover the data analysis frameworks in my next post.)

Bibliography
Mori, S., Crain, B. J., Chacko, V. P. and Van Zijl, P. C. M. (1999), Three-dimensional tracking of axonal projections in the brain by magnetic resonance imaging. Annals of Neurology, 45:265–269.

Saturday, September 25, 2010

Quantitative white matter fiber analysis: a short history (Part II)

Part II: Imaging

This is the second of three parts. Parts I and III are here and here.



With the advent of soft tissue imaging technology--computed tomography (CT) in 1972 and magnetic resonance imaging (MRI) in 1977--it was possible to examine living brains. In 1982, physicians saw multiple sclerosis (MS) lesions for the first time in a live patient. Since then, clinicians have increasingly relied on brain scans for diagnosis and treatment. With this in vivo technology, physicians and researchers could track disease progression in individual patients or in cohorts followed longitudinally. This has led to a better understanding of degenerative white matter disease and has improved treatment options.

The MR signal can be assessed in different ways and the 1990s saw the emergence of two important MRI modalities. The first, Seiji Ogawa's 1990 proposal to use contrasts in blood oxygen response to map changes in brain activity, led to the development of functional magnetic resonance imaging (fMRI). The ability to view the brain in real time was a big step forward; it enabled us to study brain function and is responsible for the widespread use of fMRI among clinical neurologists, behavioral scientists, neuroscientists and others.

Diffusion tensor magnetic resonance imaging (DTI) was the second important MRI modality introduced. Water constitutes a large part of living tissue--white matter is 72% water--and the physical flow of fluid is described by a diffusion process. In 1994, Peter Basser, James Mattiello and Denis Le Bihan, in a landmark paper, proposed a tensor model for diffusion in which water diffusion at each image voxel is described by a symmetric tensor; the principal eigenvector and its eigenvalue give the direction and magnitude of the dominant local diffusion. White matter fibers are inherently anisotropic, and the first applications of DTI were studies of neural connectivity in which fibers were tracked from end to end. Since DTI is sensitive to changes at the cellular (microstructural) scale, it was also possible to detect disease--through indices such as fractional anisotropy (FA)--before it appeared in conventional MRI scans. Normal-appearing white matter (NAWM) in MS is one example where compromised integrity manifests through lower FA values.
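For reference, the standard definition of FA in terms of the three eigenvalues of the diffusion tensor (not given in the original post) is

FA = \sqrt{\tfrac{3}{2}} \; \frac{\sqrt{(\lambda_1-\bar\lambda)^2 + (\lambda_2-\bar\lambda)^2 + (\lambda_3-\bar\lambda)^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}, \qquad \bar\lambda = \tfrac{1}{3}(\lambda_1 + \lambda_2 + \lambda_3),

which ranges from 0 for isotropic diffusion to 1 for diffusion confined to a single direction.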

A second-order tensor model is adequate for DTI reconstructions of coherent fiber tracts, but in cases where fibers meet or cross, only one of these directions is retained. DTI tractography of callosal fibers, where the lateral projections are attenuated, is illustrative of this limitation. To overcome this shortcoming, high angular resolution diffusion imaging (HARDI), in which images are acquired along many spatially uniform gradient directions, has been used. An orientation distribution function (ODF) that can model multiple maxima representing the different fiber directions replaces the simple tensor model at each voxel. HARDI datasets offer better resolution for important DTI applications such as connectivity studies and preoperative investigations.

Saturday, September 18, 2010

Quantitative white matter fiber analysis: a short history (Part I)

Part I: Histological investigation
This is the first post in a three-part series. Parts II and III are here and here.


The modern scientific study of white matter has its roots in the 19th century, when links were being established between mental dysfunction and neuroanatomy. Correlations between postmortem abnormalities in the brains of mental patients and the clinical evaluations made while they were alive led to important discoveries. The identification of the arcuate fasciculus as a language pathway connecting the two language centers, the Broca and Wernicke regions, is one famous example. In that case, Carl Wernicke, who was developing language network models, made the association between lesions in the arcuate fasciculus and the various aphasias he had observed.

The impetus from these investigations crossed over to other developments. Theodor Meynert, the reputed neuroanatomist, had classified prominent white matter tracts, or fasciculi as they were known, based on the kinds of connections they made. Burdach and Déjérine published postmortem atlases, and both prominently included white matter dissections. New techniques for histopathological analysis were introduced. Notable among these were Camillo Golgi's staining method and Santiago Ramón y Cajal's use of it in his histological studies of nerve fibers.

Quantitative white matter fiber analysis benefited from these cumulative efforts, which made studies of fiber thinning, demyelination and microstructural damage possible. Today, postmortem dissections still give the most precise quantitative assessments.


Note of appreciation: This write-up was compiled partly from Marco Catani's--I have pointed him out before--voluminous publications. He writes exceedingly well on the subject of language networks and related themes.

Tuesday, July 27, 2010

MRI art

Artful artichokes, showy 'shrooms,
seeds cantilevered in cantaloupe.

Fractal flows,
and more
at this MRI show.

Saturday, July 24, 2010

To find a mean in a nonlinear manifold

These are some notes on the Karcher mean. I will be updating this post hopefully in the coming weeks.


In a Euclidean space, the sample mean of a set of k points, x_1, x_2, ..., x_k, is the arithmetic average

\bar{x} = \frac{1}{k} \sum_{i=1}^{k} x_i.

In a nonlinear manifold, a simple summation is no longer possible. We can, however, make an extrinsic computation by embedding the manifold in a vector space, computing the Euclidean mean and projecting the result back onto the manifold. A disadvantage of this approach is that the mean computed depends on the choice of embedding.

A second possibility is an intrinsic computation, i.e., one where we use intrinsic manifold computations to compute the mean.

To compute an intrinsic mean within a manifold, M, we use the concept of the mean as the centroid of a density. This idea was put forward by Fréchet to calculate means in a Riemannian manifold. The computation involved a minimization, but the existence and uniqueness of the resulting mean could not be guaranteed (see Pennec's 1999 NSIP paper for details). Karcher's proposal that a local instead of a global mean be used (see Karcher's 1977 paper) led to a practical implementation. We shall henceforth refer to this local mean as the Karcher Mean.
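In symbols (a standard formulation, added here for reference), the intrinsic mean of points x_1, ..., x_k on a manifold M with geodesic distance d is a minimizer of the Fréchet variance:

\mu = \arg\min_{p \in M} \frac{1}{k} \sum_{i=1}^{k} d(p, x_i)^2.

The Fréchet mean asks for a global minimizer; the Karcher Mean settles for a local one, which is what gradient-based algorithms actually deliver.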


(To be updated ...)



Karcher Mean references I found helpful
Ricardo Ferreira et al. have a paper entitled Newton Method for Riemannian Centroid Computation in Naturally Reductive Homogeneous Spaces, which has implementation details, including the intrinsic computations for well-known manifolds such as the sphere, the special orthogonal group SO(n), and the space of positive definite matrices.
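As a concrete illustration of the intrinsic recipe, here is a minimal Python sketch of the Karcher Mean of points on the unit sphere S^2, computed by iterating the sphere's exponential and logarithm maps. It follows the generic gradient-descent scheme, not the Newton method of Ferreira et al., and the function names are mine.

```python
# Sketch: Karcher mean on the unit sphere via exp/log map iteration.
import numpy as np

def sphere_log(p, q):
    """Log map at p: tangent vector at p pointing toward q, with length equal
    to the geodesic (great-circle) distance. Antipodal q is a degenerate case."""
    v = q - np.dot(p, q) * p                  # project q onto the tangent plane at p
    nv = np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros(3) if nv < 1e-12 else theta * v / nv

def sphere_exp(p, v):
    """Exp map at p: follow the geodesic from p in direction v for length |v|."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def karcher_mean_sphere(points, n_iter=100, tol=1e-10):
    """points: (k, 3) array of unit vectors. Average the log maps at the
    current estimate, shoot back with the exp map, and repeat until the
    mean tangent vector vanishes."""
    mu = points[0] / np.linalg.norm(points[0])
    for _ in range(n_iter):
        v_mean = np.mean([sphere_log(mu, q) for q in points], axis=0)
        if np.linalg.norm(v_mean) < tol:
            break
        mu = sphere_exp(mu, v_mean)
    return mu
```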

Bibliography
1) M. Fréchet, "Les éléments aléatoires de nature quelconque dans un espace distancié," Annales de l'Institut Henri Poincaré, Vol. 10, (1948), pp. 215-310.
2) X. Pennec, “Probabilities and statistics on Riemannian manifolds: Basic tools for geometric measurements,” in Proc. NSIP'99, Vol. 1, (1999), pp. 194–198.
3) H. Karcher, Riemannian center of mass and mollifier smoothing. Commun. Pure and Appl. Math. 30 (1977), pp. 509–541.

Thursday, July 1, 2010

The corpus callosum and interhemispheric communication

The corpus callosum (CC), with over 300 million fibers, is the largest white matter fiber bundle in the human brain. (It is easily identifiable in conventional MRI scans.) Topographically, it is centered along the midsagittal plane with radiations that extend to the prefrontal and frontal cortex in the anterior brain, the sensory-motor cortex in the middle and the parietal, temporal and occipital lobes in the posterior half of the brain.

This large and heterogeneous collection of fibers is responsible for interhemispheric communication. Michael Gazzaniga, a neuroscientist at Dartmouth College, has been studying the nature of this left brain-right brain communication for over 30 years. Here he explains some of his fascinating findings to Alan Alda, former Hawkeye and now host of Scientific American Frontiers. And this is one of Gazzaniga's papers. (A similar account of the mysterious workings of the brain first got me interested in brain imaging. The book in question was V.S. Ramachandran's Phantoms in the Brain.)

Because of the important role it plays, the CC is the focus of many studies. Some are concerned with changes in the shape, size or structure of the CC. These changes may occur due to aging or degenerative disease. On one end of the spectrum, work is being done to provide tools to measure and monitor these changes. At the other end are the clinical studies. An example of a clinical study might be one that links the different stages of the disease or aging process with physical alterations.

Tuesday, June 22, 2010

White Matter Fiber Analysis

Shape, scale, orientation and position, the physical features associated with white matter fibers, can, either individually or in combination, be used to define feature spaces designed for specific end-applications. Such a treatment is useful since the quantitative analysis of white matter fibers has diverse applications, each with a different focus and objective.

In recent work, we describe a Riemannian framework in which various combinations of these features are considered. (This was presented at the ISBI 2010 conference. The slides are here, a version of the paper here.)

The framework also provides tools for computing statistical summaries of curves, which enables us to perform a full statistical analysis. In the context of DTI fibers, a mean and variance that describe the essential characteristics of a fiber bundle can be used to represent a set of fibers. We can then proceed to tasks of statistical inference such as parameter estimation and hypothesis testing.

I am currently using the tools and metrics defined within this mathematical framework to show how morphological changes due to disease progression can be studied. Shape distances in tandem with distances defined within other manifolds like the shape+orientation manifold give us very encouraging results.
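For readers who want a feel for what factoring out these features can look like, here is a toy Procrustes-style comparison of two curves in which position, scale and orientation are removed before measuring the residual difference. This is only an illustration of the general idea, not the framework from the ISBI paper.

```python
# Toy illustration: remove position, scale and orientation before comparing
# two curves, each given as an (n_points, 3) array with the same n_points.
import numpy as np

def procrustes_shape_distance(c1, c2):
    # Remove position: center each curve at the origin.
    c1 = c1 - c1.mean(axis=0)
    c2 = c2 - c2.mean(axis=0)
    # Remove scale: normalize each curve to unit Frobenius norm.
    c1 = c1 / np.linalg.norm(c1)
    c2 = c2 / np.linalg.norm(c2)
    # Remove orientation: best rotation of c2 onto c1 via SVD (Procrustes).
    U, _, Vt = np.linalg.svd(c1.T @ c2)
    R = U @ Vt
    if np.linalg.det(R) < 0:                  # keep a proper rotation, not a reflection
        U[:, -1] *= -1
        R = U @ Vt
    # What remains is a crude "shape" discrepancy.
    return np.linalg.norm(c1 - c2 @ R.T)
```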

Wednesday, June 2, 2010

Differential geometry in 10 slides

Partha Niyogi's very lucid talk entitled Geometric Methods and Manifold Learning includes a brief and very basic introduction to differential geometry (starts at t=40:49), which I found helpful.

This was part of the Machine Learning Workshop I attended at the University of Chicago last June (MLSS'09). There were several other talks and tutorials of note. I especially enjoyed Emmanuel Candès' talk on sparse signal recovery. The talks are available at the videolectures website.

Saturday, May 8, 2010

Tagore tribute

There is a beautiful song in the 1961 Hindi movie Kabuliwalah that I thought was a fitting tribute to Tagore on his sesquicentennial. Tagore's original story was perhaps less sentimental but just as deeply touching.

A hundred years ago, Kabuliwallahs plied the streets of North India. With tales to tell and wares to sell, they, as outsiders, seemed to have captured the Indian imagination. Here is Dharmendra, a man of many disguises, doing his Kabuliwalah impersonation in the Bond-style masala movie Jugnu.

India has had a complex relationship with the Afghans. Tagore's sympathetic portrayal of a man from Kabul should be seen in the context of the dark shadow of history. Yes, Tagore was a great humanist and an early feminist too.

Wednesday, March 24, 2010

Finally a blip in the low signal to noise newscape

Grigory Perelman once again spurns a hefty honorarium. I love his style, I share his disgust.

Friday, February 12, 2010

ISBI 2010

I'll be in Rotterdam in mid-April to present our paper entitled A Comprehensive Riemannian Framework for the Analysis of White Matter Fiber Tracts at the ISBI conference. This is the abstract:

A quantitative analysis of white matter fibers is based on different physical features (shape, scale, orientation and position) of the fibers, depending on the specific application. Due to the different properties of these features, one usually designs different metrics and spaces to treat them individually. We propose a comprehensive Riemannian framework that allows for a joint analysis of these features in a consistent manner. For each feature combination, we provide a formula for the distance, i.e. quantification of differences between fibers and a formula for geodesics, i.e. optimal deformations of fibers into each other. We illustrate this framework in the context of clustering fiber tracts from the corpus callosum and study the results from different combinations of features.


This is work I did with Anuj Srivastava and his student Sebastian Kurtek.

Thursday, January 21, 2010

India Unbound

Dense Delhi fog followed me to Gwalior, Jhansi, Orchha--everywhere it seems, even France. I spent most of the first week of the new year in and out of airports and railway stations.
But this post is not about the long journey back to Rennes. India Unbound, Gurcharan Das's personal account of the socio-economic history of post-independence India, is a compelling read. The story of a Punjabi family with roots on the other side of the Radcliffe divide, and their middle-class aspirations, annotates the discussions on economic policy. What could be dry chapters on Nehru, Shastri and the never-ending Gandhi line are brought to life with vignettes that have an "I was there" ring of authenticity. It is this storytelling that makes this book different from others. It is as easy to read as a well-researched work of fiction in the style of Amitav Ghosh.