Title page for ETD etd-01062005-145509


Type of Document Dissertation
Author Sarigul, Erol
Author's Email Address esarigul@vt.edu
URN etd-01062005-145509
Title Interactive Machine Learning for Refinement and Analysis of Segmented CT/MRI Images
Degree PhD
Department Electrical and Computer Engineering
Advisory Committee
  Advisor Name           Title
  Abbott, A. Lynn        Committee Chair
  Bell, Amy E.           Committee Member
  Conners, Richard W.    Committee Member
  Kline, D. Earl         Committee Member
  Schmoldt, Daniel L.    Committee Member
  Wang, Anbo             Committee Member
Keywords
  • user interface
  • image segmentation
  • decision trees
  • machine learning
  • postprocessing
Date of Defense 2004-09-17
Availability unrestricted
Abstract
This dissertation concerns the development of an interactive machine learning method for refinement and analysis of segmented computed tomography (CT) images. This method uses higher-level domain-dependent knowledge to improve initial image segmentation results.

A knowledge-based refinement and analysis system requires the formulation of domain knowledge. A serious problem faced by knowledge-based system designers is the knowledge acquisition bottleneck. Knowledge acquisition is difficult, and it remains an active research topic in machine learning and artificial intelligence. Typically, a knowledge engineer works with a domain expert to formulate the expert's knowledge for use in an expert system. That process is tedious and error-prone: the domain expert's verbal description can be inaccurate or incomplete, and the knowledge engineer may misinterpret the expert's intent. In many cases, domain experts would rather demonstrate their expertise through actions than explain it.

These problems motivate us to make the knowledge acquisition process less challenging in another way. Instead of trying to elicit expertise from a domain expert verbally, we can ask the expert to demonstrate it through actions that the system can observe. When the system learns from those observed actions, the approach is called learning by demonstration.

We have developed a system that learns region refinement rules automatically. The system observes the steps a human user takes while interactively editing a processed image, and then infers rules from those actions. During the system's learn mode, the user views labeled images and makes refinements with a keyboard and mouse. As the user manipulates the images, the system stores information about those manual operations and develops internal rules that can be used later for automatic postprocessing of other images. After one or more training sessions, the user places the system into its run mode. The system then accepts new images and uses its rule set to apply postprocessing operations automatically, in a manner modeled after the actions learned from the human user. At any time, the user can return to learn mode to introduce new training information, which the system uses to update its internal rule set.
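As a rough illustration of this learn/run workflow, the sketch below records (region features, user action) pairs in learn mode and replays induced rules in run mode. All names here are hypothetical, and any rule inducer with scikit-learn-style fit/predict methods could be plugged in; this is a sketch of the idea, not the dissertation's actual code.

class InteractiveRefiner:
    """Hypothetical skeleton of the learn/run workflow described above."""

    def __init__(self):
        self.examples = []   # (region_features, user_action) pairs from learn mode
        self.rules = None    # induced rule set, rebuilt after training sessions

    def observe(self, region_features, user_action):
        # Learn mode: record one manual edit (e.g. a merge or delete)
        # together with measurements of the region it was applied to.
        self.examples.append((region_features, user_action))

    def update_rules(self, learner):
        # Re-induce the internal rule set from everything observed so far,
        # so returning to learn mode keeps refining the rules.
        X = [features for features, _ in self.examples]
        y = [action for _, action in self.examples]
        self.rules = learner.fit(X, y)

    def refine(self, regions):
        # Run mode: propose a postprocessing action for each region
        # of a new segmented image.
        return list(zip(regions, self.rules.predict(regions)))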

The system does not simply memorize a particular sequence of postprocessing steps during a training session, but instead generalizes from the image data and from the actions of the human user so that new CT images can be refined appropriately.
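Because the keywords list decision trees, one plausible reading of this generalization step is that region-level features and the corresponding user actions form a training set for a tree learner. The sketch below uses scikit-learn's DecisionTreeClassifier with made-up feature values purely for illustration; the dissertation's actual features and learner may differ.

from sklearn.tree import DecisionTreeClassifier

# Region-level features recorded during learn mode; the columns
# [area_px, mean_intensity, perimeter_px] are illustrative assumptions.
X_train = [
    [120,  35.0,  48],    # small, dark region the user deleted
    [95,   28.5,  41],    # another small, dark region the user deleted
    [5400, 180.2, 310],   # large, bright region the user merged with a neighbor
    [4900, 175.8, 290],   # similar region, same action
]
y_train = ["delete", "delete", "merge", "merge"]

# The tree generalizes over feature values rather than memorizing
# pixel coordinates or a fixed sequence of editing steps.
rules = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# A region from a new, unseen CT image is classified by the same rules.
print(rules.predict([[110, 33.0, 45]]))   # -> ['delete']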

Experimental results have shown that the resulting system, IntelliPost, improves the segmentation accuracy of the overall system by applying its learned postprocessing rules. In tests on two different CT datasets of hardwood logs, IntelliPost improved accuracy by 1.92% and 9.45%, respectively. On two different medical datasets, the improvements were 4.22% and 0.33%, respectively.

Files
  Filename          Size     Approximate Download Time (Hours:Minutes:Seconds)
                             28.8 Modem  56K Modem  ISDN (64 Kb)  ISDN (128 Kb)  Higher-speed Access
  ETD_esarigul.pdf  3.17 Mb  00:14:41    00:07:33   00:06:36      00:03:18       00:00:16
