INCT-MACC Annual Report
Annual Activity Report
National Institute of Science and Technology
in
Medicine Assisted by Scientific Computing
INCT-MACC
2009–2010
Brazil, 01 August 2010
Director
Raúl Antonino Feijóo (LNCC/MCT)
Vice-Director
Artur Ziviani (LNCC/MCT)
International Technical and Scientific Board
Alfio Quarteroni (École Polytechnique Fédérale de Lausanne, Switzerland)
Enrique Zuazua (Universidad Autónoma de Madrid, Spain)
David Roger Jones Owen (Swansea University, United Kingdom)
Eric Fleury (Institut National de Recherche en Informatique et en Automatique, France)
Nicolas D. Georganas (University of Ottawa, Canada) – In memoriam 1
Radha Nandkumar (National Center for Supercomputing Applications, United States)
Abdul Jamil Tajik (Mayo Clinic, United States)
Management Committee
Nelson Albuquerque de Souza e Silva (UFRJ)
Marco Antonio Gutierrez (INCOR)
Alexandra Maria Vieira Monteiro (UERJ)
Alair Augusto Sarmet Moreira Damas dos Santos (UFF)
Márcio Sarroglia Pinho (PUCRS)
Executive Committee
Pablo Javier Blanco (LNCC/MCT)
Gustavo Carlos Buscaglia (USP)
Denise Guliato (UFU)
Gilson Antonio Giraldi (LNCC/MCT)
Jauvane Cavalcante de Oliveira (LNCC/MCT)
Selan Rodrigues dos Santos (UFRN)
Antonio Tadeu Azevedo Gomes (LNCC/MCT)
Débora Christina Muchaluat Saade (UFF)
Bruno Schulze (LNCC/MCT)
Maria Cristina Silva Bôeres (UFF)
1 Sadly, Prof. Nicolas Georganas passed away in July 2010.
Contents
Foreword about the INCT-MACC..............................................................................................5
1) Summary ..............................................................................................................................8
2) Introduction...........................................................................................................................8
2.1) Areas of expertise within the INCT-MACC .....................................................................9
2.2) Institutional references .................................................................................................10
2.2.1) Mission ..................................................................................................................10
2.2.2) Vision.....................................................................................................................10
2.2.3) Values and principles.............................................................................................10
3) Structure and organization of the INCT-MACC ................................................................... 11
3.1) Organogram ................................................................................................................. 11
3.2) Facilities .......................................................................................................................12
3.2.1) Headquarters.........................................................................................................12
3.2.2) Associated laboratories .........................................................................................12
3.2.3) Collaborator laboratories .......................................................................................14
3.3) Meetings and interaction among laboratories ..............................................................15
3.4) Computational and equipment infrastructure ...............................................................16
3.5) Extranet........................................................................................................................17
4) Science and technology highlights......................................................................................19
4.1) Area 1: Modeling of physiological systems...................................................................20
4.1.1) Computational hemodynamics via dimensionally-heterogeneous models.............20
4.1.2) Closed-loop models of the cardiovascular system.................................................25
4.1.3) Mesh generation....................................................................................................27
4.1.4) Geometrical modeling............................................................................................29
4.1.5) Simulation of complex flows with moving interfaces ..............................................29
4.1.6) Quantification of left ventricle movement and applications ....................................30
4.1.7) Bone trauma and its applications...........................................................................31
4.1.8) Other research lines in this area of expertise ........................................................36
4.1.9) Developed innovative technologies .......................................................................41
4.2) Area 2: Medical image processing ...............................................................................44
4.2.1) Segmentation, feature extraction and registration .................................................44
4.2.2) Classification and pattern recognition ....................................................................46
4.2.3) Visualization of 3D datasets and software development........................................47
4.2.4) Distributed visualization and management systems ..............................................49
4.2.5) Digital prosthesis design........................................................................................49
4.2.6) Automated segmentation techniques with applications for Alzheimer’s disease....50
4.2.7) Deriving fuzzy rules based on rough sets with the aim of pattern classification.....55
4.2.8) Quantification of left ventricle movement (continuation to 4.1.6) ...........................56
4.3) Area 3: Collaborative virtual environments ...................................................................58
4.3.1) Immersive & collaborative virtual environments for medical training .....................58
4.3.2) Multi-sensorial virtual environments for medical data visualization........................67
4.3.3) New methodologies for haptic use in medical collaborative virtual environments..69
4.3.4) Framework for biopsy exam simulation .................................................................72
4.3.5) Breast 3D anatomic atlas ......................................................................................74
4.3.6) Virtual and augmented reality for training and assessment of life support
procedures.......................................................................................................................79
4.4) Area 4: Information systems in health ..........................................................................84
4.4.1) Acute myocardial teleconsultation and monitoring system (AToMS) .......................84
4.4.2) Syndromic surveillance decision support system for epidemic diseases...............89
4.4.3) QoS Support for Health Information Systems over Wireless Mesh Networks .......91
4.5) Area 5: Distributed computing cyberenvironments .......................................................93
4.5.1) Hemolab and Imagelab on a private cloud ............................................................93
4.5.2) Performance and deployment evaluation of applications ......................................94
5) Science and technology results ..........................................................................................98
5.1) Publications in journals ................................................................................................98
5.2) Book chapters ............................................................................................................102
5.3) Publications in conference proceedings.....................................................................103
5.4) D.Sc. theses, M.Sc. dissertations and undergraduate monographs........................... 116
5.5) Scientific events organization.....................................................................................124
5.6) Participation in conferences .......................................................................................125
5.7) Software development ...............................................................................................125
5.8) Awards .......................................................................................................................126
Foreword about the INCT-MACC
The advance of scientific computing has driven a development without precedent in the
history of human society. The popularization of the personal computer, the advent of the
Internet, and the development of wireless communication, high-performance distributed
computing, distributed databases and data mining for knowledge discovery, monitoring
techniques, scientific visualization, virtual reality, numerical simulation, and computational
modeling of complex systems now permeate many human activities, giving rise to
profound changes.
In medicine, this new reality had its roots at the beginning of the 20th century, when it was
called telemedicine (according to the World Health Organization, “telemedicine is the supply
of services related to healthcare when distance is a critical factor. Such services are provided
by healthcare professionals, using communication and information technologies…”).
However, that notion is now outdated in view of the possibilities opened by the newer
technologies mentioned above. For example, through the computational modeling of
complex physiological systems that couple, across multiple spatial scales (from
1 nm for the pore size in ion channels to 1 m for the size of the human body) and temporal
scales (from 10⁻⁶ s for Brownian motion to 10⁹ s for a human lifetime), the biochemistry,
biophysics, and anatomy of cells, tissues, and organs, it is possible to gain insight into their
functioning under normal conditions, as well as under conditions altered by pathological
processes or medical procedures, yielding additional information that contributes to the
enhancement of diagnosis, treatment, and planning of diverse medical procedures.
In Brazil, as in several developing countries, there is another, even greater requirement that
precedes data manipulation: the demand for standardization and the subsequent
integration of data. This demand arises from the common practice of generating data in
various formats at various medical centers.
In this sense, a system of national scope must rest on a computational environment that
integrates data systems and medical services (which incorporate computational modeling and
simulation), based on the most modern information technologies, so that the different
regions of our large territory are able to make use of it.
Such a system must offer basic services integrated with modern technologies, such as remote
monitoring with transmission and processing of biological signals and medical images,
planning of medical procedures via computational modeling and simulation, techniques of
virtual and augmented reality, and the training and formation of human resources. At the
same time, it is important to offer the possibility of holding meetings through videoconference,
as well as of analyzing data through immersion in virtual environments capable of
reproducing the corresponding reality in great detail.
Finally, the most important point is access by the poor population to all these modern
technological resources. Basic services and new technologies can be integrated through a
low-cost network, like the Internet, turning the Brazilian public health services into users of
high technology in the area of medicine.
Such technological resources, not yet fully available in any medical system, can be supplied
in a relatively short time and at a low cost. Generally speaking, their implementation depends
on the deployment of network services and automated systems for medical and administrative
control, and on integration through a computational infrastructure inserted in a Grid
environment for high-performance distributed computing. In this environment, highly
competent multidisciplinary teams from many areas of human knowledge promote transfer
mechanisms and innovation in the solution of important national problems related to health.
On the other hand, the need for virtual organizations (such as the National Institutes of
Science and Technology), in which groups of researchers (possibly geographically
dispersed) act through a thematic and cooperative network, is a reality in our country,
consistent with the need to work on highly complex multi- and interdisciplinary issues such as
those posed by this new trend of medicine assisted by scientific computing.
To operate such organizations, as already mentioned, we need collaborative
environments, not only for the development of applications but also for user access to
computational resources.
In Brazil, there is infrastructure as well as scientific expertise to meet the needs mentioned
above. Thus, we identify the National Centers for High Performance Processing
(CENAPADs), integrated by the SINAPAD (National System for High Performance
Processing), managed by LNCC/MCT (National Laboratory for Scientific Computing, of
the Ministry of Science and Technology, where the INCT-MACC is headquartered), connected
by a high-performance network (from hundreds of Mbps to Gbps) provided by the RNP
(National Research Network), and offering a computational capacity of tens of Tflops. It
follows that there is a network in Brazil into which applications specifically developed for
medicine can be incorporated immediately.
We see thus that the proposal for the establishment of this INCT-MACC (National Institute of
Science and Technology in Medicine Assisted by Scientific Computing) under the coordination
of LNCC/MCT is not motivated by the appearance of Call 015/2008 of CNPq; rather, it is the
result of a long journey in which it was possible to integrate researchers from seemingly
distant areas (engineering, computation, and medicine/health), overcoming obstacles of
"language" and of scientific approach to the same phenomena. The team thus constituted has
the expertise to meet the enormous scientific and technological challenges that the present
project demands and to develop applications that allow the integration of medical data in a
high-technology network, as well as modeling systems and simulators for various areas of
medicine, also making possible the consolidation of something that has been articulated for a
long time: the National Network of R&D, for Human Resources Formation and for the
Development of Technological Innovations in Medicine Assisted by Scientific
Computing.
This document presents a small sample of the results corresponding to the large efforts of the
members of the INCT-MACC throughout its first year of existence. Indeed, the machinery of
the INCT-MACC, which started slowly and even before the funds arrived
(September–November 2009), has now attained its full power as a consequence
of the conviction, passion, and enthusiasm of the members engaged in this initiative.
Raúl A. Feijóo
INCT-MACC Director
1) Summary
This document presents the Annual Activity Report on the activities carried out within the
context of the National Institute of Science and Technology in Medicine Assisted by
Scientific Computing (INCT-MACC) during the period 02/09/2009 – 01/08/2010. These
activities range from the available scientific and technological contributions to the future
perspectives to be pursued in the forthcoming years. After a brief introduction to the
INCT-MACC and the associated and collaborator laboratories involved in the institute, the
main contributions and ongoing works are featured.
2) Introduction
The INCT-MACC was founded on November 28th, 2008, within the Program of National
Institutes of Science and Technology, and is characterized by the following general aspects:
• a cooperative and multidisciplinary R&D network of continental extent;
• a focus on R&D activities;
• a commitment to the development of technological innovations in its areas of interest.
While the first funding arrived between September (FAPERJ) and November (CNPq) 2009,
the members of the INCT-MACC had already been pursuing the goals within the institute's
different areas of expertise.
The aim of the INCT-MACC is to connect, in a consistent manner, areas (medicine + computer
science + modeling) that, a priori, progress separately and whose intersection, as far as Brazil
is concerned, is very small (or nearly nonexistent). Because of this, we can say that the project
is inherently innovative and that, since it permeates disconnected areas of science and
creates connecting elements among them, it has all the scientific characteristics of a
relatively new, multidisciplinary, and highly promising area of science (computer science
applied to medicine). This concept of Medicine Assisted by Scientific Computing produces, as
a consequence, a new area of research on which Brazil should embark in order to meet its
domestic needs as a developing country on the way to becoming a developed one.
Although there is nowadays well-established knowledge in various areas of scientific
computing, the use of these concepts in medicine is still at an early stage, and any effort to
enhance our understanding and to develop new techniques and suitable scientific
methodologies should be considered well worth the effort. Without a doubt, this is science
at its fullest expression, and it gives the project relevance not only in terms of scientific
innovation (as said), but also in terms of technological innovation and impact on society. In
view of this, it is necessary to implement oriented and sustained research in order to achieve
objectives and goals such as the ones established in this project.
The combination of medical image processing + computational modeling + medicine +
distributed and networked systems generates many possibilities for moving to another level
in the use of technology applied to medicine. The incorporation of computational modeling
and simulation in medicine portends a much deeper knowledge of the phenomena that take
place within our organism, significantly complementing the information we currently obtain
through data acquisition machines (which is the way technology is inserted in medicine these
days). These precise, personalized, clinically qualified models will be used to develop
innovative medical technologies for predicting change; detecting and diagnosing conditions;
stimulating physical and biochemical actions and measuring their effects; commanding
prostheses; planning, optimizing, and assisting therapy protocols and surgical interventions,
as well as emergency medical attendance and vigilance in public health; and training and
assessing attendance support.
2.1) Areas of expertise within the INCT-MACC
Year by year, the INCT-MACC will focus its R&D activities seeking, on the one hand, to enable
the immediate integration and acquisition of knowledge and technologies already established
and developed by its researchers and participants and, on the other hand, to be the vehicle for
the appearance of new knowledge in relevant areas of health and for the development of new
applications at the service of medicine in the country. In particular, the topics addressed by
the research groups within the INCT-MACC are summarized in the following:
1. Computational modeling and simulation of complex physiological systems with
emphasis on
a. the Human Cardiovascular System. In order to meet the most relevant topics
within this point, the activities are focused on:
• computational modeling and simulation of the cardiovascular system;
• computational modeling and simulation for the automatic identification
of mechanical properties of tissues;
• computational modeling of the cardiovascular system for patients with
ischemic heart disease;
• multiscale modeling in the computational modeling of biological tissues;
b. bone trauma and its applications in diagnosis, treatment and planning of various
medical procedures.
2. Advanced medical image processing including visualization and three-dimensional
reconstruction of patterns with medical relevance with emphasis on:
a. medical image processing for automatic segmentation of anatomical structures
oriented to the modeling and simulation;
b. medical image processing for automatic segmentation of anatomical structures
oriented to computer aided diagnosis.
3. Collaborative virtual environments of virtual and augmented reality and
telemanipulation in the medical area for medical training, human resources formation
and surgical planning with emphasis on:
a. new methodologies to incorporate haptic devices in collaborative systems for
the medical area;
b. development of a framework for biopsy exam simulation;
c. a 3D anatomical atlas applied to breast-related information;
d. development of techniques to support software production applied to health
activities;
e. virtual and augmented reality for training and assessing attendance support.
4. Health information systems, with applications to emergency healthcare and syndromic
surveillance.
5. High performance distributed computing cyberenvironments for medical applications.
These five areas of research and development will be shortly referred to in this text as:
1. Area 1: Modeling of physiological systems.
2. Area 2: Medical image processing.
3. Area 3: Collaborative virtual environments.
4. Area 4: Health information systems.
5. Area 5: Distributed computing cyberenvironments.
2.2) Institutional references
2.2.1) Mission
Conducting research and development in computer science and its applications in medicine,
especially the computational simulation and modeling of the physiological systems that make
up the human body; promoting the development of medical image processing, scientific
visualization, and virtual reality in medical applications directed to computer-aided diagnosis,
treatment, surgical planning, medical training, and accreditation; employing the most modern
techniques of communication and multimedia transmission to develop and manage
high-performance computing environments that meet the country's needs in medicine
assisted by scientific computing; forming human resources and promoting the transfer of
technology and innovation to the area of health assisted by scientific computing.
2.2.2) Vision
To be, at both the national and international levels, an institute of excellence in scientific
computing applied to medicine, acting as a reference in activities of research and
development, technology transfer and innovation, and the training of highly qualified human
resources in the area mentioned above; being responsible for the development of
high-performance computing environments for the medical applications developed, to be
made available to the research and health communities and, as a consequence, to serve the
people through the National System for High Performance Processing (SINAPAD).
2.2.3) Values and principles
Excellence and respect for merit and scientific values; stimulating creativity, the formation of
human resources, and ongoing training; promoting the utmost dedication and efficiency in
conducting these activities, which should be carried out with transparency and ethics,
including the public and social responsibility of a national institution open to society.
3) Structure and organization of the INCT-MACC
The INCT-MACC gathers, within the central subject of Medicine Assisted by Scientific
Computing, the best research groups in frontier areas of science, promoting basic and
fundamental scientific research at an international competitiveness level, stimulating the
formation of human resources in different technical and scientific levels, and supporting the
scientific and technological innovation in medical/health applications of great impact for the
welfare of the population.
3.1) Organogram
The INCT-MACC was structured so as to maximize the interaction among laboratories
belonging to the same or even different areas of research. Thus, the research teams compose
the different Associated and Collaborator laboratories, which are gathered in the five major
areas of expertise of the INCT-MACC. The Management Committee of the INCT-MACC is
composed of five researchers from the project team and is chaired by the Project Coordinator.
In turn, there is an Executive Committee, which is chaired by the project vice-coordinator and
is composed of ten researchers chosen by the Management Committee from among the
members of the INCT-MACC research team, taking into account technical and scientific
excellence. The representativeness of the INCT-MACC researchers is assured in the
Executive Committee through the following composition: two researchers for each of the
five R&D areas of the institute. The coordination of the INCT-MACC, as well as the
Management Committee, will be advised on the implementation of scientific and technological
policies and on the definition of the Scientific and Technological Master Plan of the
INCT-MACC by an International Technical and Scientific Board composed of worldwide
renowned researchers in the areas of interest of the institute. Figure 3.1.1 summarizes this
structure.
Figure 3.1.1. Organogram of the INCT-MACC.
3.2) Facilities
3.2.1) Headquarters
The headquarters of the INCT-MACC are located in the National Laboratory for Scientific
Computing of the Ministry of Science and Technology (LNCC/MCT), Petrópolis, Rio de
Janeiro.
3.2.2) Associated laboratories
The Associated laboratories are research groups from Brazil playing the main role within the
areas of research and development of technological innovations. There are 23 associated
laboratories distributed across the country according to the map shown in Figure 3.2.1.
Figure 3.2.1. Distribution of Associated laboratories all around Brazil.
The Associated laboratories members of the INCT-MACC are the following:
1. HeMoLab - Hemodynamics Modeling Laboratory, LNCC/MCT, RJ.
2. ComCiDis - Distributed Scientific Computing Laboratory, LNCC/MCT, RJ.
3. Scientific Visualization and Virtual Reality Laboratory - LVCRV, LNCC/MCT, RJ.
4. ACiMA - Collaborative Environments and Applied Multimedia Laboratory, LNCC/MCT, RJ.
5. MARTIN - Mechanisms and Teleinformatics Architecture Laboratory, LNCC/MCT, RJ.
6. Instituto do Coração Edson Saad e Serviço de Cardiologia do Hospital Universitário
Clementino Fraga Filho da Faculdade de Medicina da Universidade Federal do Rio de
Janeiro.
7. Grupo UFF-Telemedicina da Universidade Federal Fluminense, RJ.
8. Laboratório de Telessaúde do Centro Biomédico da Universidade Estadual do Rio de
Janeiro.
9. Grupo Associado MLHIM (Multi-Level Healthcare Information Modeling) da Faculdade
10. Laboratório de Grid do Instituto de Computação da Universidade Federal Fluminense,
UFF, RJ.
11. OptimizE - Engineering Optimization Laboratory, COPPE, UFRJ, RJ.
12. Instituto do Coração do Hospital de Clinicas da Faculdade de Medicina da
Universidade de São Paulo (InCor - HC FMUSP), Serviço de Informática, São Paulo.
13. Laboratório de Aplicações de Informática em Saúde (LApIS) da Escola de Artes,
Ciências e Humanidades - Universidade de São Paulo
14. Laboratório de Computação de Alto Desempenho (LCAD), Instituto de Ciências
Matemáticas e de Computação, Universidade de São Paulo, Campus de São Carlos.
15. Grupo de Computação Ubíqua da Universidade Federal de São Carlos, UFSCar, SP.
16. Laboratório de Tecnologias para o Ensino Virtual e Estatística (LabTEVE) da
Univesidade Federal da Paraíba, Joao Pessoa, PB.
17. Engenharia Biomédica da UnB - Gama, Brasília.
18. Laboratório de Banco de Dados 2 (LBD) - FACOM - da Faculdade de Computação da
Universidade Federal de Uberlândia, MG.
19. LEBm - Laboratório de Engenharia Biomecânica do Hospital Universitário da
Universidade Federal de Santa Catarina.
20. Grupo de Realidade Virtual da Pontifícia Universidade Católica do Rio Grande do Sul
(PUCRS).
21. C3SL - Centro de Computação Científica e Software Livre do Departamento de
Informática da Universidade Federal do Paraná, UFPR.
22. Laboratório de Visualização e Realidade Virtual, Departamento de Informática e
Matemática Aplicada da Universidade Federal do Rio Grande do Norte DIMAP/UFRN.
23. Grupo de Redes, Engenharia de Software e Sistemas (GREaT) do Departamento de
Computação da Universidade Federal do Ceará, UFC.
3.2.3) Collaborator laboratories
The Collaborator laboratories are external institutions that play a complementary, yet
fundamental, role in the activities of the INCT-MACC. These are institutions from abroad that
are strongly linked to Associated laboratories. The worldwide distribution of the INCT-MACC
network through the connection with institutions from abroad is shown in Figure 3.2.2.
Figure 3.2.2. Distribution of Collaborator laboratories from abroad.
The Collaborator laboratories members of the INCT-MACC are the following:
1. División de Mecánica Computacional of the Centro Atómico Bariloche, Bariloche,
Argentina
2. Departamento de Mecánica & Laboratorio de Bioingeniería, Facultad de Ingeniería,
Universidad Nacional de Mar del Plata, Mar del Plata, Argentina.
3. PLADEMA, Universidad Nacional del Centro de la Provincia de Buenos Aires, Tandil,
Argentina.
4. Department of Electrical and Computer Engineering, and Department of Surgery and
Radiology, University of Calgary, Calgary, Alberta, Canada.
5. Group for Computational Imaging & Simulation Technologies in Biomedicine, Pompeu
Fabra University, Barcelona, Spain.
6. École Supérieure d'Ingénieurs en Électronique et Électrotechnique, Paris, France.
7. École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
8. Modelling and Scientific Computing, Dipartimento di Matematica, Politecnico di Milano,
Milano, Italy.
9. University of Wales Swansea, Swansea, UK.
10. Universidade do Porto, Porto, Portugal.
11. University College London, UK.
3.3) Meetings and interaction among laboratories
The interaction among Associated and Collaborator laboratories has been made possible by
the strong commitment of researchers not only to technical visits, but also to online and
virtual meetings.
Researchers working in the same area of interest have been encouraged to arrange:
i. a series of meetings during the 1st Workshop on Scientific Computing in Health
Applications, organized by the INCT-MACC at the LNCC, June 2010;
ii. monthly online or virtual meetings for discussing ongoing work;
iii. two visits per year for treating aspects related to the planning of forthcoming activities;
iv. physical meetings during national conferences, workshops, and seminars.
3.4) Computational and equipment infrastructure
The computational infrastructure employed in achieving the results described in the present
annual report is the following:
1) Clusters at the National Laboratory for Scientific Computing (LNCC/MCT – HeMoLab,
ACiMA, MARTIN, ComCiDis Lab), which includes a CAVE setup, Rio de Janeiro
2) Clusters at the High Performance Computing Laboratory in the Institute of Mathematics
and Computer Science (LCAD/ICMC), São Paulo
3) Computational resources at the Engineering Optimization Laboratory in the Federal
University of Rio de Janeiro (UFRJ/OptimizE), Rio de Janeiro
4) Computational resources at the Heart Institute (InCor – HC FMUSP), São Paulo
5) Dynamic and Static Testing Machines (BRASVALVULAS), Digital Torquemeter with
interface for momentum and angle measurement (in operation), Torsion Testing
Machine (INSTRON), Multistation wear simulation testing machine (MTS), Multistation
simulator machine (AMTI), Injection Moulding Machine (ARBOURG), Extensometer
measurements facilities and Optical equipment for measurements of displacements
and strains. Equipment at the Laboratório de Engenharia Biomecânica in the Federal
University of Santa Catarina (UFSC/LEBm), Santa Catarina
6) Computational resources at the Federal University of Uberlandia (LAB2/UFU), Minas Gerais
7) Computational resources at the Federal Fluminense University (MídiaCom/UFF), Rio
de Janeiro
8) Wireless mesh network testbed at the Federal Fluminense University (MídiaCom/UFF), Rio de Janeiro
9) Computational resources at the Federal University of Rio Grande do Norte
(VVRL/UFRN), Natal, RN
10) Computational resources at the Federal University of Paraíba (LabTEVE/UFPB), João
Pessoa, PB.
11) Computational resources at the University of São Paulo (LApIS/USP), São Paulo, SP.
12) Computational resources at the University of Brasilia (EE/UnB), Brasilia, DF.
13) Computational resources at the Pontifical Catholic University of Rio Grande do Sul
(GRV/PUCRS), Porto Alegre, RS.
14) Computational resources at the State University of Rio de Janeiro (FCM/UERJ), Rio de
Janeiro
15) Computational resources of the SmartGridLab (SGL) at UFF, Rio de Janeiro
16) Computational resources at C3SL at UFPR, Paraná
17) Computational resources at GREAT, UFC, Ceará
18) New computational resources being acquired during 2010 (at LNCC):
• 996 cores (+996 HyperThread), 2.9 TB memory, 31 TB hard disks, Infiniband (FINEP-CIBERESTRU);
• 1152 cores (+1152 HyperThread), 2.4 TB memory, 24 TB hard disks, Gigabit (FAPERJ-CIBERESTRU_RJ);
• 144 cores (+144 HyperThread), 3584 NVidia Tesla M2050 GPU cores.
3.5) Extranet
As seen in previous sections, the INCT-MACC is a large network embracing researchers from
all over Brazil and abroad. In order to bring these researchers together within a common research framework, an extranet system was created. The entire management of the project, both from the research and financial standpoints, is performed through this integrated system, called SIG2PTEC, to which all the members have access through http://macc.lncc.br.
The system SIG2PTEC provides an environment for registration and publication of the
activities of the INCT-MACC. This system aims at feeding and managing a database with
general information about the INCT-MACC as well as with the specific activities carried out by
members of the INCT-MACC.
By centralizing this information in an easily accessible platform for the INCT-MACC community, the SIG2PTEC also becomes an efficient management system, allowing quick access to associated laboratory reports and the control and assessment of the activities under development. In this manner, the planning and transparency of the actions of the different groups are assured, since the system permits the members to follow all their activities and their compliance with the proposed goals.
Figure 3.5.1 presents the initial screenshot in the website of the INCT-MACC (left figure) and
in the extranet system SIG2PTEC (right figure).
In Figure 3.5.2 the layout of the home page of the SIG2PTEC system is displayed. Here the user can monitor the R&D activities in which he/she is involved, as well as look up the scientific results of other colleagues within the INCT-MACC. At the same time, the SIG2PTEC makes possible the easy registration of the entire scientific production of the different members and/or Associated Laboratories (publications in journals, books, conferences, theses, dissertations, among others), as seen in Figure 3.5.3. In this environment it is also possible for researchers to make funding requests, as shown in Figure 3.5.4.
Figure 3.5.1. Access to the INCT-MACC website (http://macc.lncc.br) (left) and to the system SIG2PTEC (right).
Figure 3.5.2. Layout of the home page of the SIG2PTEC system.
Figure 3.5.3. Registration of scientific production through the system SIG2PTEC.
Figure 3.5.4. Funding request through the system SIG2PTEC.
4) Science and technology highlights
In this section we succinctly describe and exemplify the most important contributions
achieved by the Associated and Collaborator Laboratories in the five areas of expertise of the
INCT-MACC. All these contributions were published in international journals and also
presented in national as well as international conferences (see Section 5). Moreover, they form part of original and innovative research activities under development in doctoral theses and master's dissertations supervised by members of the INCT-MACC research team (see
Section 5.4). Furthermore, all of them were incorporated in the software already developed
(see Section 5.7).
4.1) Area 1: Modeling of physiological systems
4.1.1) Computational hemodynamics via dimensionally-heterogeneous models
In the last decades, computational modeling has proven to be a powerful tool to study several
aspects related to blood flow in the vasculature and its clinical implications. Indeed, it is
increasingly recognized that local hemodynamics conditions play an important role in the
onset and development of atherogenic lesions. Although the specific phenomena are not so
well established, several studies show that atherosclerotic lesions are prone to occur in zones
where blood flow disturbances are present. This calls for studies in which the flow and pressure fields are determined accurately, as these processes depend on the local hemodynamic state; frequently, however, in-vivo experiments are difficult to conduct without inconveniencing patients. In contrast, computer models make these studies possible with increasing realism, often under conditions that are not reachable with in-vivo experiments.
A significant step in hemodynamics simulations was taken with the introduction of coupled
3D–1D models. In these works the 3D and the 1D models work together in order to
accommodate the complex interaction between the phenomena modeled by each
subcomponent of the whole system. A common point in those works is the special treatment
of inflow and outflow ‘‘adaptive boundary conditions’’ for the 3D models, actually coupling
conditions, over the so-called coupling interfaces. In this kind of formulation, once the 1D
model has been calibrated it is used as an automatic self-adaptive boundary conditions
supplier for the 3D portion of the model to gain adaptability to varying contexts. This is
possible because the ‘‘isolation process’’ is circumvented due to the integration of the
coupling conditions into the formulation.
In view of the complexity of the cardiovascular system, multi-scale modeling arises, a priori,
as a natural approach. In such an approach it is possible to couple three elements of interest
in the analysis:
• all the complexity of 3D blood flow circulation in specific arterial districts such as bifurcations, tortuous vessels and valves, among others;
• all the complexity of the systemic response; for example, for a given heart beat (the input) the shape of the arterial pulse is obtained (the output), which conveys the information associated with the multiple interactions of the pressure pulse at a systemic level of integration;
• all the influence of the peripheral beds, taking into account the peripheral resistance and compliance that determine the overall state of the microvascular network, as well as rules for blood flow distribution.
Incorporating all these three elements in the definition of a multi-scale model is a crucial point
when the goal is to analyze the interaction among local and global dynamics (phenomena) of
the cardiovascular system.
Figure 4.1.1 presents schematically the coupling among 3D, 1D and 0D models. Notice that
the 3D models are defined wherever it is needed to gain insight in the hemodynamics details,
whereas the 1D model comprises the major arteries of the arterial system and can be formed
(as in the present work) by hundreds of arterial segments. Finally, the 0D models behave as
boundary condition providers for the 1D model, representing the action of the peripheral beds.
The comments made in the previous paragraph suggest another way of understanding these coupled models: 1D-0D models can be understood as self-adaptive boundary condition providers for the 3D ones, and vice versa. In any case, note that with the multi-scale approach
we have a model capable of facing a wider class of physical situations. Some examples that
involve a considerable complexity are the following:
• Hemodynamics simulations under non-periodic physiological regimes in arterial districts, that is, changes in the mean values of flow rate and pressure over time.
• Simulation of surgical procedures to study the outcomes of a modification in the vasculature in terms of local/global hemodynamics (for example, procedures affecting the dynamics of the circle of Willis).
• Simulation of heart beat speed-up, or of an increase in the volume of blood ejected at each cardiac beat, and the influence on the blood flow circulation at certain geometrical singularities.
• Simulation of the influence of arterial reconfiguration (aneurysm formation, arterial occlusion, etc.) on the global and local hemodynamics.
All the situations stated above are in fact different kinds of changes in the cardiovascular
system that can be accounted for through the use of an integrated 3D-1D-0D model.
Figure 4.1.1. Heterogeneous model of the cardiovascular system (3D-1D-0D coupled models)
and specific case of an abdominal artery embedded in a 1D-0D model of the arterial network.
The example of Figure 4.1.1 (right) is a situation where patient-specific information provided
by MRI images is used to define the geometry of the domain. Specifically, the district
corresponding to the iliac bifurcation was taken from a volume data set of medical images and
was embedded in the 1D model of the arterial tree. In this case we analyze the physiological
blood flow at the iliac bifurcation considering different spatial discretizations, paying attention
to local and global results (see Figure 4.1.2).
Figure 4.1.2. Flow rate and pressure at the inlet (left) and outlet (right) of the abdominal aorta.
The differences commented on above also have an effect on the OSI and WSS indicators, as can be seen in Figure 4.1.3. Although the differences in the oscillating behavior of shear stresses are small, and the main characteristics of the OSI structure are maintained in the three cases, it can be seen that the mean value of shear stresses is considerably underestimated as the grid becomes coarser (up to 50% lower in the maximum value between cases (i) and (iii), at the dividing wall of the bifurcation).
Figure 4.1.3. OSI indicator (from left to right cases (i), (ii) and (iii)).
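As a reference for these indicators, the OSI at a wall point is commonly computed from the wall shear stress (WSS) vector sampled over one cardiac cycle as OSI = 0.5(1 − |⟨τ⟩|/⟨|τ|⟩). A minimal sketch of this pointwise evaluation (illustrative only, not the project's post-processing code):

```python
import math

def osi(wss_series):
    """Oscillatory Shear Index at one wall point.

    wss_series: wall shear stress vectors (tx, ty, tz) sampled
    uniformly over one cardiac cycle. The index is 0 for
    unidirectional shear and approaches 0.5 for purely
    oscillatory (direction-reversing) shear.
    """
    n = len(wss_series)
    # time average of the WSS vector (sign-sensitive)
    mean_vec = [sum(v[k] for v in wss_series) / n for k in range(3)]
    mag_of_mean = math.sqrt(sum(c * c for c in mean_vec))
    # time average of the WSS magnitude (sign-insensitive)
    mean_of_mag = sum(math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
                      for v in wss_series) / n
    return 0.5 * (1.0 - mag_of_mean / mean_of_mag)

print(osi([(1.0, 0.0, 0.0)] * 10))                   # unidirectional -> 0.0
print(osi([(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)] * 5))  # fully reversing -> 0.5
```

In a finite element post-processor the same formula is evaluated at every wall node, yielding maps such as those of Figure 4.1.3.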
In addition it is possible to compute the trajectories of particles in order to determine another
important hemodynamic index, the residence time. Thus, as shown in Figure 4.1.4, we
compute the particle pathlines in order to study the effect of geometrical disturbances in the
main blood flow stream. In this figure, a strong recirculation region can be observed in the middle part of the arterial segment, as a result of the presence of an incipient dilation. All these
computations have been carried out under physiological regimes, providing an accurate
framework for studying all the characteristics of the blood flow in patient-specific vessels.
Figure 4.1.4. Particle pathlines in the abdominal artery throughout the cardiac cycle.
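Pathline computation amounts to integrating particle positions through the time-dependent velocity field produced by the flow solver. A generic sketch using classical RK4, assuming the field is available as a callable (in practice it would be interpolated from the finite element solution):

```python
def trace_pathline(velocity, x0, t0, t1, n_steps):
    """Trace a particle pathline by integrating dx/dt = u(x, t)
    with the classical 4th-order Runge-Kutta scheme.

    velocity(x, t) -> (u, v, w); x0 is the seed position.
    Returns the list of positions visited.
    """
    dt = (t1 - t0) / n_steps
    x, t = list(x0), t0
    path = [tuple(x)]
    for _ in range(n_steps):
        k1 = velocity(x, t)
        k2 = velocity([x[i] + 0.5 * dt * k1[i] for i in range(3)], t + 0.5 * dt)
        k3 = velocity([x[i] + 0.5 * dt * k2[i] for i in range(3)], t + 0.5 * dt)
        k4 = velocity([x[i] + dt * k3[i] for i in range(3)], t + dt)
        x = [x[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
        t += dt
        path.append(tuple(x))
    return path

# sanity check with a uniform axial field: the seed advects one
# unit along x over one time unit
path = trace_pathline(lambda x, t: (1.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                      0.0, 1.0, 100)
```

Residence-time estimates then follow from how long traced particles remain inside a region of interest, such as the dilation above.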
Classically, the problem of coupling dimensionally-heterogeneous models has been treated
via Dirichlet-to-Neumann iterations with relaxation. Nevertheless, it is well-known that tuning
the iterates by setting proper relaxation parameters does not follow a general procedure and
situations for which the algorithm fails may be found easily.
Particularly, the motivation for setting robust iterative strong coupling techniques lies in the
need for employing well validated codes suitably devised for systems with very different
dynamics as black boxes. The technique presented here can be understood as a domain
decomposition approach where the partitioning takes place at the coupling interfaces among
models of different dimensionality.
Basically, the original monolithic problem is understood as an interface problem in terms of
interface variables. The reinterpretation of the Dirichlet-to-Neumann algorithm applied for this
problem as a Gauss-Seidel method plays a central role, since it allows us to replace the resolution process with more robust and sophisticated iterative methods. Therefore, it is
possible to make use of the classical GMRES iterative procedure as an approach to solve the
problem. Here, the extension to non-linear problems is carried out by employing the Broyden
method as well as the Newton-GMRES algorithm. This procedure encompasses even more
resolution aspects when dealing with time-dependent problems, like in the examples
presented here.
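To illustrate the black-box character of this strong coupling, consider a single coupling interface, for which Broyden's method reduces to the secant iteration on the interface residual. The sketch below uses linear algebraic surrogates in place of the actual 3D and 1D solvers (illustrative only, not the project's implementation):

```python
def couple_interface(model_3d, model_1d, p0, p1, tol=1e-12, max_iter=100):
    """Strong coupling at one interface via the secant method, the
    one-dimensional instance of Broyden's method.

    model_3d(p) -> flow rate returned by the "3D" model for an
    interface pressure p; model_1d(q) -> pressure returned by the
    "1D" model for an inflow q. Both are used purely as black boxes;
    the residual is the pressure-continuity defect at the interface.
    """
    def residual(p):
        q = model_3d(p)            # black-box solve of one sub-model
        return model_1d(q) - p     # black-box solve of the other

    r0, r1 = residual(p0), residual(p1)
    for _ in range(max_iter):
        if abs(r1) < tol:
            break
        # secant update of the interface unknown (no relaxation
        # parameter to tune, unlike plain Dirichlet-to-Neumann)
        p0, p1 = p1, p1 - r1 * (p1 - p0) / (r1 - r0)
        r0, r1 = r1, residual(p1)
    return p1

# linear surrogates standing in for the PDE solvers; the exact
# coupled solution is p = 11/1.3
p = couple_interface(lambda p: 2.0 - 0.1 * p,   # "3D" model: q(p)
                     lambda q: 5.0 + 3.0 * q,   # "1D" model: p(q)
                     0.0, 1.0)
```

With many interfaces the scalar unknown becomes a vector of flow rates and pressures, and the secant update is replaced by Broyden or Newton-GMRES, as described above.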
The main application for which such strategies have been envisaged is in the hemodynamics
field. This example is presented with the aim of demonstrating the potentialities of the present
ideas in dealing with highly complex heterogeneous systems of utmost relevance in
computational mechanics. Such application consists in studying the blood flow in the arterial
network corresponding to the arm. This application makes use of all the ideas developed so
far. Nevertheless, the problem addressed in this section embodies slightly different physical
components. The simple components (or simple models) are 1D models for the flow of an
incompressible fluid in compliant vessels, while the complex components (or complex
models) are 3D Navier-Stokes equations in compliant domains.
An example of a coupled system can be seen in Figure 4.1.5. The 3D-1D model of the arm is
set up starting from the subclavian artery and following the main branches found in the arm. It
consists of a 1D model for which the five larger bifurcations have been replaced by 3D
geometries. Also, an inflow boundary condition is prescribed, as shown in Figure 4.1.5,
whereas at the outflow boundaries 3-element Windkessel models are considered. The
boundary condition is taken from a pure 1D model of the whole arterial tree. Note that the interface problem consists of 15 coupling interfaces, and therefore 30 interface unknowns (flow rate and pressure). In this problem we have considered Neumann boundary conditions at
every coupling interface of the 1D and 3D models.
Figure 4.1.5. Blood flow in the arm involving heterogeneous (3D and 1D) models.
The results for the coupling unknowns at the interfaces among the 3D and 1D models are
also presented in Figure 4.1.5. There, the results (pressure and flow rate) at the inlet of each
one of the five 3D bifurcations are displayed. In that figure, the propagation phenomenon can
be noticed clearly as a result of the compliance of the models employed in the simulation.
Finally, the magnitude of the velocity field at several time instants throughout the cardiac cycle
is featured in the same figure. In the present case we made use of standard 3D geometries for simplicity in setting up the example; nevertheless, even for such simple cases the complexity of the blood flow at those locations is noticeable. Indeed, Womersley-like velocity profiles due to adverse pressure gradients, as well as recirculation regions, are observed around t = 0.3. The use of
the present methodology for dealing with the coupling of 1D models and patient-specific 3D
geometries obtained from medical images is straightforward, not entailing further issues from
the point of view of the convergence of the iterative methods proposed here.
Ongoing work
The future tasks to be performed in this area involve the extension of the computational
models and numerical methods developed so far to deal with large-scale problems involved in
the simulation of blood flow in the cardiovascular system. Such large-scale problems are
characterized by several 3D models coupled with the 1D model of the arterial tree, each of the 3D models featuring over one million degrees of freedom.
As a complementary activity, we have the application of the developed methodologies, involving heterogeneous models, to real-life applications, particularly to the assessment of aneurysm rupture. This is expected to be accomplished by assembling a database combining geometric and hemodynamic indexes. This is a statistical
study involving patient-specific information in order to provide the neurosurgeon with
additional information retrieved from numerical simulations.
4.1.2) Closed-loop models of the cardiovascular system
A computational model of the entire cardiovascular system was also developed accounting for
specific vessels, systemic arteries, systemic veins, pulmonary and heart circulation and real
valve functioning. In this context it is possible to perform the integration of different levels of
circulation. This approach is usually recognized as multiscale modeling of the cardiovascular
system. Hence, the arterial tree is described by a one-dimensional model in order to simulate
the propagation phenomena that take place at the larger arterial vessels. The inflow and
outflow locations of this 1D model are coupled with proper lumped parameter descriptions (0D
model) of the remainder part of the circulatory system. At each outflow point we incorporate
the peripheral circulation in arterioles and capillaries by using a 0D three-component
Windkessel model. In turn, the whole peripheral circulation converges to the venous system through the upper and lower parts of the body, each represented by its own 0D model. Then, the right and left heart circulation, as well as the pulmonary circulation, are also accounted for by means of 0D models. In particular, we point out the modeling of the four heart valves,
which is carried out by using a non-linear model that allows for the regurgitation phase during
the valve closing. Finally, the 0D model of the left ventricle is coupled with the inflow boundary
in the 1D model, closing the cardiovascular loop. The entire 0D model that performs the
coupling between the outflow and inflow in the arterial tree consists of 14 compartments.
Additionally, we can consider the existence of 3D models accounting for the detailed aspects of blood flow in specific vessels of interest. The resultant integrated model (0D-1D-3D coupled
model) forms a closed loop network capable of taking into account the interaction between
the global circulation (0D-1D models) and the local hemodynamics (3D models). To sum up, this is carried out by putting together the following mathematical representations:
1. 1D Models for the larger systemic arteries;
2. 0D Models (R-C Windkessel models) for the arterioles and capillaries;
3. 0D Models (R-L-C models) for venules and veins to model the upper and lower body
parts;
4. 0D Models (R-L-C models) for inferior and superior vena cava, pulmonary veins and
pulmonary arteries;
5. 0D models (elastance models) for each of the four heart chambers;
6. 0D Models (non-linear non-ideal diode models) to approximate the behavior of the
tricuspid, pulmonary, mitral (bicuspid) and aortic valves; and
7. 3D Models for the specific vessels of interest.
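As an illustration of item 2, each three-element (RCR) Windkessel terminal reduces to a single ODE for the compliance pressure, C dp/dt = q − p/R2, with interface pressure p_in = p + R1 q. A sketch with explicit Euler time stepping (parameter values below are arbitrary, not the calibrated ones used in the model):

```python
def windkessel_rcr(q_of_t, r1, r2, c, p_init, dt, n_steps):
    """Three-element (RCR) Windkessel terminal.

    State equation for the compliance pressure p:
        c * dp/dt = q(t) - p / r2
    The pressure seen at the coupling point is p_in = p + r1 * q.
    Returns the list of p_in values, one per time step.
    """
    p = p_init
    p_in = []
    for k in range(n_steps):
        q = q_of_t(k * dt)
        p_in.append(p + r1 * q)
        p += dt * (q - p / r2) / c   # explicit Euler update
    return p_in

# constant inflow: p relaxes toward q*r2 with time constant r2*c,
# so the interface pressure tends to q*(r1 + r2)
trace = windkessel_rcr(lambda t: 1.0, r1=0.1, r2=1.0, c=0.5,
                       p_init=0.0, dt=0.001, n_steps=20000)
```

The other 0D compartments (R-L-C venous segments, elastance chambers, diode-like valves) follow the same pattern: a few state variables per compartment, integrated in time and exchanged at the coupling points.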
Figure 4.1.6 presents a scheme of the model developed so far. This model is still undergoing calibration, in view of the large number of parameters involved for which experimental measurements are not available.
Figure 4.1.6. Closed-loop model of the cardiovascular system.
In Figure 4.1.7 some results involving flow rates and pressures at the inlet and outlets (the latter placed at the lower and upper parts of the body) are presented. It is possible to observe that now the pressure at the capillary level is not constant, but governed by the arterial-venous coupling. The same holds for the results at the aortic root (inlet), which are the result of the cardiac-arterial coupling. In particular, the model for the valves implemented in this case accounts for regurgitation, which makes it possible to model valves exhibiting malfunctioning during the opening/closing phases. In Figure 4.1.8 (left) we observe this phenomenon taking place in the four valves. On the right of Figure 4.1.8 the opening angle of the valve is displayed, which allows us to see the dynamics of the valve functioning throughout the cardiac cycle; this angle is an unknown in the present model (depending on the pressure and flow rate across the valve).
Figure 4.1.7. Results at inlets and outlets in the closed-loop model.
Figure 4.1.8. Flow rates across the four cardiac valves. Models account for regurgitation.
Ongoing work
In this regard the developments of the INCT-MACC for the next year involve the incorporation
of autoregulation mechanisms to the current version of the closed-loop model of the
cardiovascular system. This entails identifying the different control mechanisms to classify
them into short, middle and long term control mechanisms. With this, it will be possible to set
up truly general numerical simulations of the cardiovascular system.
4.1.3) Mesh generation
Geometrical data structures play a key role in the computer treatment of images, meshes and simulation codes. Explicit ones are well established, such as Singular Half-Edge and Half-Face. They are intuitive and self-explanatory, at the expense of large memory requirements. These structures have recently been extended so as to deal with mixed meshes (tetrahedra, prisms, hexahedra and even singular objects). A topic of current interest is that of implicit data structures, in which adjacencies are obtained from algebraic rules, as is frequent in one dimension. In fact, the adjacency of a mesh of two-node segments can always be reordered in such a way that the two nodes of segment i are i and i + 1. Can a 3D mesh be reordered in such a way that its adjacency structure results from some algebraic rule of the same kind? A step in this direction is the Opposite Face structure based on the Corner-Table structure. The savings in computer memory are obvious, but the robustness of the algorithms is the subject of ongoing research. Independently of the data structures, image-based mesh generation and adaptivity is central for the project. One of the approaches followed corresponds to algebraic meshing from implicitly defined surfaces (the surface is defined as the zero of some known function). Two techniques are under consideration, based, respectively, on Isosurface Stuffing and on vertex movement and optimization.
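The one-dimensional reordering mentioned above can be demonstrated in a few lines: given an arbitrarily numbered open chain of two-node segments, a single walk from an endpoint produces a node numbering under which every segment connects consecutive nodes, making the connectivity table redundant (illustrative sketch only):

```python
def reorder_chain(segments):
    """Renumber the nodes of an open chain of two-node segments so
    that consecutive nodes get consecutive numbers: afterwards every
    segment connects nodes k and k+1, and the adjacency is implicit.
    Returns the old->new node numbering.
    """
    count, adj = {}, {}
    for a, b in segments:
        count[a] = count.get(a, 0) + 1
        count[b] = count.get(b, 0) + 1
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    # an endpoint of the open chain appears in exactly one segment
    start = min(n for n in count if count[n] == 1)
    new_id, prev, cur = {start: 0}, None, start
    # walk the chain, assigning consecutive new numbers
    while len(new_id) < len(count):
        cur, prev = [n for n in adj[cur] if n != prev][0], cur
        new_id[cur] = len(new_id)
    return new_id

# scrambled chain 1-9-3-7: nodes are renumbered 1->0, 9->1, 3->2, 7->3
numbering = reorder_chain([(9, 1), (7, 3), (3, 9)])
```

The open question in the text is whether analogous renumberings with algebraic adjacency rules exist for general 3D meshes.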
Another important line of development corresponds to the imesh algorithm and Bézier-based refinement. Imesh aims at generating meshes directly from images, without any sophisticated segmentation step. The idea being explored is based on statistical operators for the purpose
of feature and texture extraction. Some results of imesh applied to anatomical structures of
potential interest can be seen in Figure 4.1.9. Ongoing work is aimed at developing
refinement algorithms based on templates and guided by Bézier surfaces and also at the
application of the developed techniques to mesh arterial vessels to perform numerical
simulation of blood flow.
Figure 4.1.9. Mesh of the human thorax generated by Imesh.
4.1.4) Geometrical modeling
Visualization and analysis of image data sets
Content-based Image Retrieval systems and other image-based applications rely upon a set
of features that suitably define the image set of interest. However, the definition of such a set
of features for an arbitrary image dataset is still very challenging. In the context of pattern
recognition this problem is known as Feature Selection, which is tackled mainly by statistical
methods or artificial intelligence techniques. At the INCT-MACC an alternative approach is
being followed by some Associated Laboratories, in which the Feature Selection is done
interactively in a visual environment based on projections. Preliminary results are
encouraging, since with very little and intuitive user intervention the improvement of the
obtained feature sets is significant.
Computer representation of surfaces
The disparity of forms in which medical data are produced makes it very attractive to implement a computer representation of geometrical data based on the simplest data structure: unconnected point sets. An important effort has been devoted to this strategy,
elaborating upon ideas such as Partition of Unity Implicits and Algebraic Moving Least
Squares. In some developments, the representation was further exploited for point-based
front tracking, and more effective implementations have been developed in this first year of
INCT-MACC so as to update and resample arbitrary moving surfaces.
4.1.5) Simulation of complex flows with moving interfaces
Computational rheology
Within the INCT-MACC, there are groups with a long tradition in computational rheology, and
this expertise is incorporated into the project. Recent activity in this area deals with the state
of the art in complex fluid simulation, in particular the numerical treatment of highly elastic
second-order fluids and of liquid crystals. In the latter case, the fluid is subject to a magnetic
field that allows the fluid to be actuated from outside the flow domain with no moving parts
involved. From the numerical viewpoint, recent advances have focused on the implementation
of the log conformation technique to improve the robustness of the codes.
High-resolution upwinding in transport and turbulence modeling
This topic is classical in computational fluid dynamics and necessary for the objectives of the
project. New applications, such as those emanating from medicine, require the adaptation of existing numerical techniques to the specific problems. Ongoing research on state-of-the-art transport
algorithms focuses on specific numerical algorithms (ADBQUICKEST and TOPUS, the Third-Order Polynomial Upwind Scheme). Filtration of nanoparticles, fluidized bed reactors and
turbulent heat transfer have been the main applications in the last period.
Finite element methods for problems with interfaces and capillarity
Microfluidics is another key area for the accurate modeling of complex phenomena taking place in the blood flow, since medical diagnosis and drug delivery, among others, are areas in which microflows play an increasing technological role. In microfluidics the capillarity effects are dominant, and as a result the numerical simulations are challenging. Two lines of research are being followed: an ALE moving-mesh treatment and a level-set treatment of the interface, both with finite element discretization. The most recent results
concern the development of a new finite element space especially designed for the pressure
discontinuities that arise from surface tension effects, the Laplace-Beltrami discretization of
the surface tension force, and the numerical implementation of the Generalized Navier
Boundary Condition for dynamic contact angles.
4.1.6) Quantification of left ventricle movement and applications
The description and quantification of the regional function of the cardiac left ventricle (LV) is a
fundamental goal of many modalities of cardiac imaging and image analysis methods. These
tasks involve making accurate and reliable quantitative measurements of the heart, which is a
non-rigid moving object that rotates, translates, and deforms in three dimensions (3D), over
the entire cardiac cycle. The normal LV wall deformation occurring throughout the cardiac cycle may be affected by cardiac diseases. Some pathological conditions could then be identified by the change they produce in the expected normal movement.
In this project, we started to address various phases involved in the process of quantification
of three-dimensional dynamic SPECT images: quantification of cardiac motion,
representations that capture relevant characteristics of the problem for their analysis and
clinical applications. The activities developed in this period within the context of the modeling
of complex physiological systems are related to the refinement of algorithms to estimate the
velocity field for evaluation of dynamic three-dimensional movement and the evaluation of the
methods.
In order to estimate the velocity field we make use of electrocardiographic gating of cardiac
images which provides the ability to track the wall motion by acquiring sets of volumes in
different phases of the cardiac cycle, as shown in Figure 4.1.10.
Figure 4.1.10. Acquisition process of gated cardiac images. The heart period is divided into a number of frames. For each frame a data volume is acquired.
The method for description of cardiac motion studied in the project uses a velocity vector field. The velocity fields are obtained by using a classical 2D optical flow scheme, based on Horn and Schunck's formalism, extended to voxel space. This classical scheme has two constraints:
(1) the intensity of a given pixel is conserved between the image frames; (2) pixels in a neighborhood have similar velocities.
The two assumptions are combined in a weighted function. The minimization of this function leads to a linear algebraic system whose solution gives the velocity components for each pixel, with coefficients determined by the spatial and temporal derivatives of the images. The solution of such equations enables the determination of the velocity vector for each voxel at
each time frame of the cardiac cycle. Figure 4.1.11 presents the velocity field estimated for a
normal heart.
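For reference, a 2D sketch of the Horn-Schunck iteration is given below; the voxel-space extension used in the project adds a third velocity component and a z-derivative, but the update rule has the same form. This is an illustrative implementation with simple forward differences, not the project's code:

```python
def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Horn-Schunck optical flow between two 2D frames (lists of
    lists of intensities). Jacobi-style iteration on the classical
    update u = u_bar - Ix*(Ix*u_bar + Iy*v_bar + It)/(alpha^2+Ix^2+Iy^2).
    """
    h, w = len(I1), len(I1[0])
    u = [[0.0] * w for _ in range(h)]
    v = [[0.0] * w for _ in range(h)]

    def grad(i, j):
        # forward-difference spatial derivatives, temporal difference
        ix = I1[i][min(j + 1, w - 1)] - I1[i][j]
        iy = I1[min(i + 1, h - 1)][j] - I1[i][j]
        it = I2[i][j] - I1[i][j]
        return ix, iy, it

    def avg(f, i, j):
        # mean over the 4-neighborhood (smoothness constraint)
        nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        vals = [f[a][b] for a, b in nbrs if 0 <= a < h and 0 <= b < w]
        return sum(vals) / len(vals)

    for _ in range(n_iter):
        un = [[0.0] * w for _ in range(h)]
        vn = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                ix, iy, it = grad(i, j)
                ub, vb = avg(u, i, j), avg(v, i, j)
                lam = (ix * ub + iy * vb + it) / (alpha**2 + ix**2 + iy**2)
                un[i][j] = ub - ix * lam
                vn[i][j] = vb - iy * lam
        u, v = un, vn
    return u, v

# intensity ramp shifted by one pixel: recovered flow is close to +1 in x
I1 = [[float(j) for j in range(8)] for _ in range(8)]
I2 = [[float(j) - 1.0 for j in range(8)] for _ in range(8)]
u, v = horn_schunck(I1, I2, alpha=0.5, n_iter=100)
```

The parameter alpha weights the smoothness term against brightness constancy; the gated volumes of Figure 4.1.10 supply the successive frames.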
Figure 4.1.11. (a) A view of the 3D left ventricle image of a normal heart. (b) The
corresponding estimated velocity vector.
Observing this figure one can easily notice that the analysis of the velocity field from all voxels
is extremely complex, due to the large amount of information presented simultaneously. To make this information useful for diagnostic purposes, it is necessary to find a compact form of presenting it. The rest of the quantification process, which aims at facilitating
the visualization of the data obtained from the estimation of the velocity field is given by the
following activities:
• proposal for a visualization scheme for semi-quantitative analysis of the velocity components;
• proposal for a representation of a functional polar map with speed data for semi-quantitative analysis of the movement of the left ventricle;
• evaluation of the methods by means of (i) assessment of the use of the proposed representation to characterize some aspects of Cardiac Resynchronization Therapy, which is used for ventricular dyssynchrony, (ii) evaluation of the algorithms by using a mathematical phantom, and (iii) preliminary evaluation using a physical phantom.
These activities correspond to the area of medical images and are described in more detail in Section 4.2.8.
4.1.7) Bone trauma and its applications
In the context of bone trauma, several activities have been carried out with the purpose of modeling the mechanical response of prostheses and materials under different load conditions. In the following sections we summarize these developments.
Constitutive modeling of soft tissues. Viscoelasticity and damage
Soft biological tissues are frequently formed by a fibrous collagen net embedded in an extracellular substrate. The study of constitutive models capable of simulating the behavior of these kinds of materials is the subject of a research line of the LEBm. Viscoelastic effects have been studied and a variational model for the representation of fiber-reinforced soft tissues has been proposed. The efforts are now aimed at including damage evolution. The applications of these kinds of models focus on ligaments, tendons and intervertebral disks, among others. Figures 4.1.12 and 4.1.13 present some results in specific applications.
Figure 4.1.12. Simulation of a viscoelastic membrane with reinforcements in the x-direction.
Figure 4.1.13. Simulation of a viscoelastic tube with fiber reinforcement (30 degrees with
respect to axial direction). Rotation obtained due to traction.
Optical measurements of displacements and strains by DIC – Digital Image Correlation
The selection of appropriate constitutive models is an essential step for the success of the
numerical simulation of mechanical systems. Moreover, these material models require the
appropriate identification of material parameters, which is frequently performed in a direct or
indirect way by means of experimental testing. Soft materials (elastomers, soft tissues) and
non-elastic behaviors (plastic strains, necking, localization, etc.) usually involve finite
localized strains that preclude the use of conventional contact measurement techniques such as
extensometers or clip gauges. Moreover, these conventional techniques usually provide
measurements at discrete points rather than the complete field over a finite region.
A possible option for these cases is the use of optical tools. We focus here on the technique
called Digital Image Correlation (DIC). The main idea of this technique is to follow a graphical
pattern within a temporal sequence of images (film).
Firstly, a graphical pattern is drawn on the surface to be measured. This can be done, for
example, by applying black spray paint to a previously painted white surface (see Figure 4.1.14).
On this surface, a set of points and squared areas around them is chosen. This set of points is
followed along the sequence of images and their displacements calculated. During the
specimen deformation these squared areas deform, occupying new positions in the subsequent
images. It is then possible to define a one-to-one mapping that takes each squared area at
time t0 to its new position at time tn. The identification of the mapping parameters is done by
maximizing the image correlation between the experimental image obtained at time tn and the
mapped reference pattern. Finally, conventional finite element concepts and filtering
techniques allow for the calculation of in-plane and out-of-plane strain fields. Figure 4.1.15
features some preliminary results obtained with these techniques.
Figure 4.1.14. Mapping of the original squared area. The maximum correlation between the
experimental deformed image and the mapped one allows for the displacement calculation.
Figure 4.1.15. Displacement field obtained by DIC. a) and b) are the images of the deformed
specimen captured by two simultaneous cameras. c) 3D surface reconstruction of the original
mesh by means of the measured displacements. d), e) and f) show the displacement fields in
the x, y and z directions, respectively.
The way in which this displacement and strain field information is applied to the parameter
identification of material models is the focus of work in progress. This includes classical
testing cases on specimens (traction, compression and torsion) and finite element simulations
of the experimental tests.
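The subset-matching step at the heart of DIC can be sketched in a few lines. The snippet below is an illustrative, integer-pixel version of the matching idea, not the implementation used in this project: the synthetic speckle pattern, the subset half-width and the search range are all assumptions made for the example.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_subset(ref, cur, center, half=7, search=5):
    """Find the integer displacement of the squared area around `center`
    that maximizes the correlation between the two frames."""
    r, c = center
    patch = ref[r - half:r + half + 1, c - half:c + half + 1]
    best, best_d = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = cur[r + dr - half:r + dr + half + 1,
                       c + dc - half:c + dc + half + 1]
            s = zncc(patch, cand)
            if s > best:
                best, best_d = s, (dr, dc)
    return best_d

# Synthetic speckle pattern rigidly shifted by (2, 3) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
disp = track_subset(ref, cur, center=(30, 30))
```

In the actual technique the mapping also includes subset deformation (affine warps) and sub-pixel refinement, which this integer search omits.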
Evaluation of orthopedic surgical techniques
The study of the mechanical behavior of the lumbar spine before and after the execution of an
arthroplasty is performed. The surgical intervention consists of the fixation of two or more
vertebrae by titanium rods and pedicle screws, promoting their arthrodesis (fusion). The aim
of this treatment is to relax the spinal structures, leading to an important decrease of back
pain. For the simulation stage, it is necessary to define material properties for bone,
intervertebral discs, facet joints and ligaments. The experimental analysis will be performed
with the aid of in-vitro tests in order to compare with and complement the numerical data. It is
hypothesized that the use of a vertebral fixation device with PEEK rods instead of titanium
rods would help to achieve a much less rigid behavior in the operated motion segment, which
would involve fewer degeneration effects in the adjacent spinal levels, while maintaining the
proper stabilized condition needed for bone fusion.
A finite element model of the complete lumbar spine considering the vertebrae, intervertebral
discs, facet joints and ligaments was created (Figure 4.1.16). The vertebrae are considered
as an orthotropic material. The modeled intervertebral disc consists of the nucleus pulposus
and the surrounding annulus fibrosus. The nucleus is considered as a hyperelastic,
incompressible material and the annulus as a non-linear, non-homogeneous composite of
collagen fibers embedded in a ground substance. Unidirectional elements with non-linear
material properties are assigned to the ligaments. The facet joints are represented as
frictionless contact surfaces.
After modeling the healthy case for control (physiologic condition), a vertebral fusion with an
internal fixation was applied to the model. Intradiscal pressure, relative rotation of the
vertebrae, facet joint forces and the helical axis of rotation are analyzed for the healthy case
and for additional cases: the use of internal fixations with titanium and with PEEK rods.
Figure 4.1.16. Geometry of the lumbar spine (L1-L5) and finite element meshes.
Development of orthopedic bioabsorbable implants
Surgeries for knee ligament reconstruction make use of components like interference
screws and transversal rods. Traditionally made of titanium, these components are being
progressively substituted by bioabsorbable polymeric ones. The present project focuses on the
design, analysis and fabrication of a set of these components for an appropriate surgical
insertion and post-operative behavior. Simulation, mechanical testing and ex-vivo surgery
are part of this research, carried out in cooperation with a Brazilian company of orthopedic implants.
In this first stage, the design of a new interference screw able to be used on the femoral and
tibial sides of the knee is being carried out. The selected polymer was PLDLA 70/30, due to its
adequate mechanical properties and degradation time of about one year. This time is sufficiently
longer than the time needed for bone-tendon healing inside the bony tunnel after the substitution
of the ACL (Anterior Cruciate Ligament) by a natural substitute (patellar tendon or hamstrings).
To guarantee the safety and efficacy of the medical device, the mechanical properties of the
screw have been evaluated using the finite element method, and the mechanical demands put
on the device during the surgical act were estimated through biomechanical tests in which the
screw insertion was replicated (see Figures 4.1.17 and 4.1.18).
Figure 4.1.17. Biomechanical function of the interference screw during ACL reconstruction.
The screw acts as a wedge compressing the graft against the bone.
Figure 4.1.18. Left: examples of commercial interference screws for ACL reconstruction.
Right: in vitro evaluation of the biomechanical performance of an interference screw.
4.1.8) Other research lines in this area of expertise
Several lines of research have been the subject of exhaustive study within the context of the
INCT-MACC, as complementary studies to the ones detailed in the previous sections.
Here we enumerate the most relevant ones.
Immersed methods and applications
The geometrical complexity of biological systems makes it impossible to build meshes that
conform to all the details, especially when there are moving/deforming parts. In view of this,
some research effort is oriented towards the treatment of immersed boundaries, i.e.,
boundaries that are not followed by the simulation mesh. Two approaches have been
followed. One of them, developed by members of the INCT-MACC, is based on switching the
cells cut by the interface to a Discontinuous Galerkin approximation and on imposing
the interface conditions strongly on them. In the first year of the INCT-MACC this approach was
improved and extended to nonlinear elasticity. An alternative approach is based on the use of
the “exterior” nodes to impose the boundary condition in a least-squares sense. This
approach has been implemented and extended over the past year within the research activities
carried out in the context of the INCT-MACC.
Thus, immersed methods appear in the context of complex fluid-structure interaction
problems as an alternative solution to deal with complex structural kinematics in a simplified
manner. As said above, the group of the INCT-MACC has tackled several problems related to
the theoretical formulation and practical implementations of immersed methods. The set up of
a variational formulation for the immersed problem was the first approach, which was followed
by simple applications. Figure 4.1.19 shows its application to the modeling of the opening and
closing phases of a valve.
Figure 4.1.19. Modeling of a valve mechanism using immersed methods.
The current developments involve the combination of Eulerian and Lagrangian methods for
solving the fluid-structure problem in moving meshes. Also, the use of Discontinuous Galerkin
methods is being explored.
Constitutive modeling of arterial walls
The constitutive modeling of arterial wall behavior can be handled through several
approaches: (i) purely phenomenological constitutive models, (ii) layer-based
phenomenological models, (iii) histologically-based phenomenological models and (iv)
multiscale models. The current developments involve the four approaches at different levels.
Approaches (i)-(iii) have undergone a comprehensive theoretical study, and the
corresponding computational implementations have been tested. This has been developed
within a generic framework for finite strain analysis aimed at dealing with any kind of
constitutive response derived from a strain energy function. This was possible due to the use
of finite differences in computing the stress (second order) and tangent (fourth order) tensors
starting from the strain energy function. Figure 4.1.20 displays the analysis of a small arterial
segment subjected to a wide range of values of internal pressure. Two constitutive models are
compared: a phenomenological model (type (i)) and a structurally-layered model (type (ii)).
The advantage of such an approach lies in the rapid model prototyping, which is very
desirable in view of the wide variety of possibilities in setting different constitutive models.
Figure 4.1.20. Constitutive responses of two complex elastin-collagen models.
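The finite-difference evaluation of the stress from the strain energy mentioned above can be illustrated with a small sketch. It uses a Saint Venant-Kirchhoff energy only as a stand-in (the constitutive models in the framework are more elaborate), with assumed material constants, and checks the result against the analytic stress.

```python
import numpy as np

LAM, MU = 1.0, 0.5  # illustrative material constants (assumed values)

def W(E):
    """Saint Venant-Kirchhoff energy of the Green-Lagrange strain E,
    used here only as a stand-in strain energy function."""
    return 0.5 * LAM * np.trace(E) ** 2 + MU * np.trace(E @ E)

def stress_fd(energy, E, h=1e-6):
    """Second Piola-Kirchhoff stress S_ij = dW/dE_ij computed by central
    finite differences of the strain energy, as in the generic framework."""
    S = np.zeros_like(E)
    for i in range(3):
        for j in range(3):
            dE = np.zeros((3, 3))
            dE[i, j] = h
            S[i, j] = (energy(E + dE) - energy(E - dE)) / (2 * h)
    return S

E = np.array([[0.010, 0.002, 0.000],
              [0.002, -0.005, 0.001],
              [0.000, 0.001, 0.003]])
S_fd = stress_fd(W, E)
S_exact = LAM * np.trace(E) * np.eye(3) + 2 * MU * E  # analytic check
```

The fourth-order tangent tensor is obtained the same way, by differencing the stress; this is what enables the rapid model prototyping mentioned in the text, since only the energy needs to be coded for each new model.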
In turn, approach (iv) has been studied from a theoretical standpoint. This study yielded
very interesting results about the potential and predictive capabilities of multiscale
modeling.
The ongoing work in this line of research is the application of these complex constitutive
models to large-scale problems for modeling blood flow in patient-specific arterial vessels.
Moreover, multiscale methods for soft tissues are also a matter of current study.
Indeed, Figure 4.1.21 presents the model of a representative element of arterial wall featuring
an elastic matrix and collagen fibers. The figure also shows the pressure-radius relation for
an arterial segment subjected to axial stretch and internal pressure with such a constitutive
behavior.
Figure 4.1.21. Preliminary multiscale model of an arterial wall including elastin and collagen.
Lattice-Boltzmann methods in computational hemodynamics
Lattice-Boltzmann methods (LBM) appeared as alternative methodologies to solve the flow of
incompressible fluids. Assuming the blood to be a Newtonian fluid (a hypothesis valid for the
major arterial segments), it is possible to employ the LBM for this application. The motivation
lies in the possibility of developing scalable solvers for ultra-scale simulations of blood flow in
patient-specific vessels. This is viable due to the fact that the LBM is an explicit method (with
an intrinsic parallel nature). Figure 4.1.22 shows the interaction of the degrees of freedom of
an LBM cell with its neighborhood (collision). At the following time step, the quantities
(distributions) are transported along the lattice velocities (propagation) and the process
starts again.
Figure 4.1.22. In the LBM, the collision and propagation phases are highly parallelizable.
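A minimal sketch of the collision-propagation cycle on a D2Q9 lattice is given below. It is an illustrative BGK implementation on a periodic grid, not the project's production solver; the relaxation time and domain size are assumed values. The collision step is purely local and the propagation a shift of the lattice, which is what gives the method its intrinsic parallel nature.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Discrete Maxwellian equilibrium, second order in the velocity."""
    cu = np.einsum('qd,xyd->qxy', C, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho * W[:, None, None] * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lbm_step(f, tau=0.8):
    """One BGK collision (local) plus streaming/propagation (shift) step."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', C, f) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau              # collision
    for q in range(9):                                   # propagation
        f[q] = np.roll(np.roll(f[q], C[q, 0], axis=0), C[q, 1], axis=1)
    return f

# Small periodic domain initialized with a density bump
nx = ny = 16
rho0 = np.ones((nx, ny))
rho0[8, 8] = 1.1
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
for _ in range(10):
    f = lbm_step(f)
mass = f.sum()
```

Since both phases conserve mass exactly, the total mass after the time loop equals the initial one up to round-off, a useful sanity check for any LBM code.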
We have developed several models for incompressible flow based on the LBM and have
applied them to several problems within the blood flow regime. For instance, in Figure 4.1.23 we
present the results obtained with the LBM in two cases: (i) an arterial bend, with the
corresponding characteristic flow as a function of the Dean number, and (ii) a vessel with an
aneurysm, with the characterization of recirculation in the inner part of the aneurysm. In the
latter case we specifically computed several hemodynamic quantities of interest and tried to
characterize their behavior as a function of the flow regime, that is, of the Reynolds and
Womersley numbers.
Figure 4.1.23. Studies conducted with the LBM for arterial bending and intra-aneurysmal blood
flow.
Currently we are exploring the fluid-structure interaction problem based on the LBM, with the
aim of modeling blood-valve interaction and also the presence of suspensions in the blood
flow. Likewise, this kind of model is suitable for simulating coagulation processes and particle
tracing calculations, and these two topics are also matters of current research.
In vivo identification of material properties in coronary arteries
The modeling of anatomical structures relies on the data available to feed the mathematical
representations. In this regard, the identification of material parameters from medical images
constitutes a crucial step. In order to do this, it is necessary to combine several steps in the
identification process, namely: (i) quantification of the velocity field of the anatomical
structures of interest, (ii) calculation of strains and synchronization with the applied loads due
to hemodynamic regimes, and (iii) setting up of an inverse problem to estimate the material
parameters.
The research activities this year were directed to the first approaches to steps (i) and (iii). In
step (i), some optical flow methods have been developed and implemented with the aim of
retrieving the velocity field from a sequence of medical images representing the movement of
structures. In the first implementations this is accounted for via iterative methods based on
finite differences for a minimization problem. In this minimization problem the goal is to
compute the velocity field that minimizes a given criterion based on the material derivative of
the pixel intensities, assuming that the pixel intensities are preserved between two images.
Figure 4.1.24 shows the results obtained for a synthetic scenario.
Figure 4.1.24. Estimation of the velocity field from a sequence of images: real velocity field
and estimated velocity field.
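The iterative finite-difference minimization described above can be illustrated with the classical Horn-Schunck scheme, which penalizes the brightness-constancy residual plus a smoothness term. This is a generic textbook variant, not necessarily the implementation developed in the project; the regularization weight, iteration count and synthetic scenario are assumptions.

```python
import numpy as np

def navg(a):
    """Four-neighbour average on a periodic grid (smoothness coupling)."""
    return 0.25 * (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                   + np.roll(a, 1, 1) + np.roll(a, -1, 1))

def horn_schunck(I1, I2, alpha=0.1, iters=300):
    """Estimate (u, v) such that Ix*u + Iy*v + It ~ 0 (constant pixel
    intensity between frames) with an alpha-weighted smoothness term,
    via finite-difference fixed-point iterations."""
    Iy, Ix = np.gradient(I1)          # spatial gradients
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        ub, vb = navg(u), navg(v)
        t = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ub - Ix * t
        v = vb - Iy * t
    return u, v

# Synthetic scenario: a smooth blob translated one pixel in x
n = 64
yy, xx = np.mgrid[0:n, 0:n]
I1 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 5.0 ** 2))
I2 = np.roll(I1, 1, axis=1)           # true motion: u = +1, v = 0
u, v = horn_schunck(I1, I2)
```

On this synthetic pair the recovered field points in the +x direction where the image gradient is informative, mirroring the kind of result shown in Figure 4.1.24.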
On the other hand, for step (iii) some optimization algorithms were developed and
implemented in order to estimate, from a given strain field and known loads, the material
parameters minimizing a given criterion that evaluates the gap between the data and the
solution given by the model. These algorithms, based on the concept of sensitivity analysis,
advance through the problem using quasi-Newton iterations combined with local searches.
Such methodologies are able to compute the exact material parameters in test cases, as
shown in Figure 4.1.25, where the estimation of three regions with different material
parameters is carried out. On the right of that figure we observe the iterations needed to
reach convergence for different variants of the algorithm.
Figure 4.1.25. Estimation of material parameters for a pipe with internal pressure.
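A toy version of this sensitivity-based identification can be sketched with a Gauss-Newton loop, a simple quasi-Newton-type scheme driven by the residual sensitivities. The exponential stress-strain law and all parameter values below are illustrative assumptions, not the constitutive models used in the project.

```python
import numpy as np

def model(strain, k1, k2):
    """Illustrative exponential stress-strain law (an assumed toy model)."""
    return k1 * (np.exp(k2 * strain) - 1.0)

def gauss_newton(strain, stress, p0, iters=100):
    """Estimate (k1, k2) by Gauss-Newton iterations; the columns of J are
    the sensitivities of the residual with respect to the parameters."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        k1, k2 = p
        r = model(strain, k1, k2) - stress                       # model-data gap
        J = np.column_stack([np.exp(k2 * strain) - 1.0,          # dr/dk1
                             k1 * strain * np.exp(k2 * strain)]) # dr/dk2
        p -= np.linalg.solve(J.T @ J, J.T @ r)                   # normal equations
    return p

strain = np.linspace(0.0, 0.2, 20)
stress = model(strain, 5.0, 8.0)            # noise-free synthetic "measurements"
k1, k2 = gauss_newton(strain, stress, p0=(4.0, 7.0))
```

As in the test cases of Figure 4.1.25, with noise-free synthetic data the iterations recover the exact parameters; real IVUS-derived data will of course add noise and model error.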
Among the current developments we are considering different numerical techniques for the
estimation of the velocity field (step (i)). With this estimated field, the next task is the
calculation of strain rates and their synchronization with the internal loads. In the case of
arterial vessels, the sequence of medical images is obtained from IVUS (Intravascular
Ultrasound) exams, to which the optical flow techniques are applied. In addition, the data
acquisition process is synchronized with a catheter that retrieves the internal arterial
pressure, so as to have at hand the internal loads the structure is subjected to. This is the
ongoing work in step (ii). Finally, with such combined information we will be able to feed step
(iii) with enough real-life data to solve the inverse problem and estimate the material
parameters.
4.1.9) Developed innovative technologies
As a result of the research and development activities, three major innovative technologies
have been developed, namely the HeMoLab, ImageLab and Imesh software packages.
S1) Beta version of the HeMoLab software for the modeling of the human cardiovascular
system
This version of HeMoLab makes it possible to model the blood flow through
1D, 3D and 3D-1D models, together with the possibility of using patient-specific
information on arterial districts coming from medical images.
A preliminary validation of this software has been carried out within the
Post-graduation Program in Cardiology at the Universidade Federal do Rio de Janeiro. The
results of this pilot evaluation, which was performed with M.Sc. and D.Sc. Cardiology
students, were satisfactory. Throughout 2010 several additional seminars are planned
with the aim of continuing the validation phase.
Figures 4.1.26, 4.1.27 and 4.1.28 present several screenshots of the use of
HeMoLab in modeling the cardiovascular system.
Figure 4.1.26. Handling of 1D models for the cardiovascular system within HeMoLab.
Figure 4.1.27. Handling of 3D and 3D-1D models for the cardiovascular system within
HeMoLab.
Figure 4.1.27. Numerical simulations of blood flow in patient-specific vessels using HeMoLab.
S2) Beta version of the ImageLab software for medical image processing
This version of ImageLab incorporates the entire ITK library of filters for
image processing. Moreover, several filters specially devised for medical images within
the HeMoLab group have been included. Such filters aim at handling medical images
coming from computerized tomography, magnetic resonance and ultrasound.
In particular, this version of ImageLab allows the visualization and processing of
medical images coming from IVUS exams.
This software was partially evaluated by Dr. Marcelo Hadlich from the Federal
University of Rio de Janeiro. This assessment entailed the comparison of medical
reports performed using the visualization tools provided by ImageLab with the medical
reports provided by commercial software from Philips. The results were highly
satisfactory, with high agreement between the reports produced using both software
packages. Throughout 2010 other versions of ImageLab will be released, including new
functionalities and optimizations.
Figures 4.1.28 and 4.1.29 show different views of the ImageLab software while
handling medical images.
Figure 4.1.28. Handling medical images (DICOM format) within ImageLab for segmentation
and structure recognition.
Figure 4.1.29. Handling IVUS-acquired images within ImageLab for assessment of coronary
occlusion and parameter identification.
S3) Beta version of the Imesh software for mesh processing
This version of Imesh allows the user to generate and optimize spatial discretizations
which are suitable, for instance, for finite element analysis. In this regard, the software
permits meshing 2D and 3D structures using different methodologies and optimization
algorithms. The software also makes possible the generation of different kinds of
elements according to the needs of the user.
Currently, it is intended to integrate this software with HeMoLab in order to
provide, within HeMoLab, more robust and powerful finite element mesh generators,
which are used a posteriori by HeMoLab to carry out numerical simulations of blood
flow.
Figure 4.1.30 displays a view of the results obtained with Imesh in meshing a
cerebral structure obtained from patient-specific data.
Figure 4.1.30. Mesh generated with Imesh from tensorial data of the human brain.
4.2) Area 2: Medical image processing
4.2.1) Segmentation, feature extraction and registration
Segmentation is a fundamental step in image analysis. From a practical point of view, it is the
partition of an image into multiple regions (sets of pixels) according to some criterion of
homogeneity of features such as color, shape, texture and spatial relationship. These
fundamental regions are disjoint sets of pixels and their union composes the original whole
scene. Approaches in image segmentation can be roughly classified into: (a) contour-based
methods, like snakes and active shape models; (b) region-based techniques; (c) global
optimization approaches; (d) clustering methods, like k-means, fuzzy c-means, hierarchical
clustering and EM; and (e) thresholding methods.
Among these approaches, thresholding techniques (which compute a global threshold to
distinguish objects from their background) are simple to implement and have low
computational cost, being effective tools to separate objects from their backgrounds. We
propose a generalization of Albuquerque's algorithm that applies the non-extensive Tsallis
entropy for thresholding. Figure 4.2.1 shows a result obtained with this technique for the
segmentation of breast lesions in ultrasound images.
Figure 4.2.1. Original ultrasound image of a malignant lesion (left) and the segmentation
result (right).
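A sketch of Tsallis-entropy thresholding in the spirit of the generalized Albuquerque algorithm is shown below. The entropic index q, the histogram size and the synthetic bimodal image are illustrative assumptions; the criterion combines the foreground and background entropies through the Tsallis pseudo-additivity rule S_A + S_B + (1-q)·S_A·S_B.

```python
import numpy as np

def tsallis_threshold(image, q=0.8, nbins=256):
    """Pick the gray level maximizing S_A + S_B + (1-q)*S_A*S_B, where
    S_A and S_B are the Tsallis entropies of the two histogram classes."""
    hist, _ = np.histogram(image, bins=nbins, range=(0, nbins))
    p = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, nbins):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1 - ((p[:t] / pa) ** q).sum()) / (q - 1)
        sb = (1 - ((p[t:] / pb) ** q).sum()) / (q - 1)
        val = sa + sb + (1 - q) * sa * sb
        if val > best_val:
            best_val, best_t = val, t
    return best_t

# Synthetic bimodal image: dark background (~40), bright object (~200)
rng = np.random.default_rng(1)
img = np.clip(np.where(rng.random((64, 64)) < 0.3,
                       rng.normal(200, 10, (64, 64)),
                       rng.normal(40, 10, (64, 64))), 0, 255)
t = tsallis_threshold(img)
```

For q → 1 the criterion reduces to the classical maximum-entropy (Kapur-type) threshold; the non-extensive index q is the extra degree of freedom the generalization exploits.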
The combination of global optimization methods and contour based techniques is another
point that has been addressed by our team. In this case, we proposed a segmentation
approach that applies the topological derivative as a pre-processing step. Formally, the
topological derivative gives the sensitivity of a cost functional with respect to small domain
perturbations. The obtained result is used for initializing a deformable model (level set) in
order to get the final result. Figure 4.2.2 shows a result obtained with this approach for
cell segmentation. Figure 4.2.2.a shows the boundary obtained after the application of the
topological derivative and morphological operators, and Figure 4.2.2.b shows the level set
result.
Figure 4.2.2. (a) Level set initialization after application of the topological derivative plus
morphological operators. (b) Level set result after 135 iterations.
Once the segmentation is performed, some kind of feature extraction must in general be
considered. For instance, we have addressed the problem of segmentation and feature
extraction for panoramic X-ray images. These images are very popular as a first tool for
diagnosis in odontological protocols. We proposed a segmentation approach based on
mathematical morphology, quadtree decomposition for mask generation, thresholding, and
snake models. The feature extraction stage is steered by a shape model based on Principal
Component Analysis (PCA). Finally, morphometric data extraction is performed to obtain the
teeth measurements for dental diagnosis. Figure 4.2.3 shows the original image, the final
segmentation and the main axis obtained (“r”) for further measurements.
Figure 4.2.3. Original image (left) and final segmentation (right). The line “r” is the main axis
obtained by the PCA approach.
In addition, we combined segmentation, feature extraction and registration to compare the
accuracy of two imaging acquisition techniques, cone beam computed tomography (CBCT)
and multislice spiral computed tomography (MSCT), for the rapid maxillary expansion (RME)
problem. Specifically, the purpose of this study was to assess the accuracy of the CBCT
technique, compared with MSCT, for image-based linear measurements of the midpalatal
suture after rapid maxillary expansion. In this case, the registration is a fundamental point to
correct distortions and to guarantee that the measurements in each image are taken at
anatomically identical locations.
A two-dimensional object-based image registration process, available in Matlab
(MathWorks, Natick, MA), was applied, which seeks the best affine transformation mapping
a 2D input image (CBCT) onto a reference one (MSCT). This method relies on a sequence of n
pairs of corresponding control points in both images. The control points are automatically
taken from an image processing pipeline composed of the following steps: (1) image clipping
for selection of the region of interest; (2) image enhancement through contrast stretching; (3)
image thresholding; (4) convex hull calculation (the smallest convex set which contains the
foreground pixels); and (5) extraction of the boundary of the convex hull. This pipeline is
applied to both the reference and the input images. The obtained polygonal curves are
processed by a matching technique that gives the desired control point pairs. Figure 4.2.4
pictures a result obtained with this methodology.
Figure 4.2.4. Result of image registration: (a) MSCT reference image, (b) CBCT original
image, (c) CBCT registered image.
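The affine estimation step can be sketched as a linear least-squares fit on the control point pairs. This mirrors the role of the Matlab routine described above but is only an illustration; the synthetic rotation-scale-shift transformation below is an assumption for the example.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src control points to dst,
    i.e. dst ~ src @ A.T + b, solved as one linear system."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])        # rows [x, y, 1]
    # Solve X @ P = dst for the 3x2 parameter matrix P
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, b = P[:2].T, P[2]
    return A, b

# Synthetic control point pairs related by a known rotation + scale + shift
theta, s = 0.2, 1.1
A_true = s * np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
b_true = np.array([3.0, -1.5])
rng = np.random.default_rng(2)
src = rng.random((10, 2)) * 100
dst = src @ A_true.T + b_true
A, b = fit_affine(src, dst)
```

With noisy, automatically matched control points the same least-squares system simply averages out the matching errors, which is why a redundant set of n pairs (rather than the minimal three) is used.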
4.2.2) Classification and pattern recognition
Once segmentation and feature extraction are performed, there follows knowledge discovery
and its application to disease detection and diagnosis. In the area of classification we
focused on Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). The
primary purpose of LDA is to separate samples of distinct groups by maximising their
between-class separability while minimising their within-class variability. Its main objective is
to find a projection matrix that maximizes Fisher's criterion.
However, in limited-sample and high-dimensional problems the standard LDA cannot be used
to perform the separating task. To avoid both critical issues, we have used a maximum
uncertainty LDA-based approach (MLDA) that addresses the issue of stability by introducing a
multiple of the identity matrix.
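One simple way to realize this stabilization, in the spirit of the maximum-uncertainty MLDA approach (which raises the small eigenvalues of the within-class scatter toward their average), is sketched below for the two-class case. The data, dimensions and class means are synthetic assumptions chosen so that the standard within-class scatter would be singular.

```python
import numpy as np

def lda_direction(X, y):
    """Two-class Fisher discriminant with a stabilized within-class
    scatter: eigenvalues of Sw below their mean are raised to the mean,
    which amounts to adding a multiple of the identity on the unstable
    subspace (a sketch of the MLDA idea, not the exact published method)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    lam, V = np.linalg.eigh(Sw)
    lam = np.maximum(lam, lam.mean())           # maximum-uncertainty fix
    Sw_star = V @ np.diag(lam) @ V.T            # stabilized scatter
    w = np.linalg.solve(Sw_star, m1 - m0)       # Fisher direction
    return w / np.linalg.norm(w)

# Few samples in high dimension: plain LDA would face a singular Sw
rng = np.random.default_rng(3)
d, n = 20, 6                                    # dimension > samples per class
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(1.5, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)
w = lda_direction(X, y)
proj = X @ w
sep = proj[y == 1].mean() - proj[y == 0].mean()
```

Because the stabilized scatter is positive definite, the projection always separates the two class means, even in the limited-sample, high-dimensional regime discussed above.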
We proposed a new ranking method for Principal Component Analysis (PCA). Instead of
sorting the principal components in decreasing order of the corresponding eigenvalues, we
propose the idea of using the discriminant weights given by separating hyperplanes to select
among the principal components the most discriminant ones.
Besides, kernel versions for non-linear separating hypersurfaces were also considered. These
methods were tested for recognition and reconstruction of face images in order to validate the
methodology. We have obtained promising results that encourage further application to
classification in medical image databases.
In this subject, the extraction of feature vectors is crucial. There exist several algorithms to
compute such vectors, each with its own set of parameters, reflecting different
properties of the image dataset under investigation. Therefore, the choice of the set of
features that provides the highest classification rates or the most efficient retrieval is a
challenging task. A common approach consists of selecting a pre-labeled set of
images, followed by the definition of parameters for the feature extraction algorithm, which is
frequently a tedious and laborious task. Then, feature vectors are computed and classification
attained. We propose a new approach to this feature selection problem, by means of visual
analysis of feature spaces using point placement or projection techniques.
Projection techniques work by mapping high-dimensional data into a lower dimensional visual
space, whilst retaining, up to an extent, the distance relationships defined in the original data
space. Initially, a set of features is extracted. Then, the computed feature vectors are
visualized in a 2D representation that reveals the similarity relationships between the images
under analysis. This visual representation is used to determine whether the set of features
successfully represents the image dataset, from an expert's point of view. If the
similarity relationships match what is expected by the expert (similar images are closely
placed and dissimilar ones are positioned far apart), the set of features properly
represents the dataset and can be considered for other tasks, such as classification.
Otherwise, the parameters can be changed or another extraction algorithm can be employed,
producing a new set of features that can then be visually explored to check whether it properly
represents the dataset.
We adopted well-known texture analysis methods to generate features, such as the Haralick
co-occurrence matrix and Gabor filters. We also tested features extracted using the
bag-of-visual-features (BoVF) model, which is capable of capturing not only local, but mostly
global descriptors from images.
4.2.3) Visualization of 3D datasets and software development
The techniques in scientific visualization can be classified according to the data type
they manage: scalar fields, vector fields and tensor fields compose the usual range of data
types in this area.
Hence, we have methods for scalar field visualization (isosurface generation,
volume rendering, colormaps, etc.), vector field visualization (field line generation, particle
tracing, topology of vector fields, LIC, among others) and techniques for tensor fields
(topology and hyperstreamlines).
In volume rendering, the visualization model is based on the concept of extracting the
essential content of a 3D data field by virtual rays passing through the field. These rays can
interact with the data according to artificial physical laws designed to enhance structures of
interest inside. These laws can be summarized in a transport equation. Figure 4.2.5 pictures
the basic idea behind the model, where s is the distance along the ray, I is the scalar field to be
visualized, τ is the extinction coefficient, and g(s) represents generalized sources.
Figure 4.2.5. Volume rendering: virtual rays pass through the field, interact with the data and
give the final image.
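The transport model mentioned above is commonly written as dI/ds = g(s) − τ(s)·I(s) (the emission-absorption form, with τ as the extinction coefficient), and its discrete solution along one ray is a front-to-back compositing loop. The sketch below uses an illustrative sampled ray; the sample values are assumptions for the example.

```python
import numpy as np

def composite_ray(g, tau, ds=1.0):
    """Discrete front-to-back compositing of the emission-absorption
    transport model dI/ds = g(s) - tau(s)*I(s) along one ray."""
    I, T = 0.0, 1.0                  # accumulated intensity, transmittance
    for gi, ti in zip(g, tau):
        a = 1.0 - np.exp(-ti * ds)   # opacity of this sample interval
        I += T * a * gi              # source term, attenuated by what lies in front
        T *= 1.0 - a                 # update remaining transmittance
    return I

# A ray through empty space and then a bright, semi-opaque slab
g = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
tau = np.array([0.0, 0.0, 0.5, 0.5, 0.0])
I = composite_ray(g, tau)
```

Since each ray is composited independently like this, the per-ray loops can be distributed with no communication, which is the parallelism property exploited below.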
Isosurface extraction methods, like Marching Cubes, work differently. Given a scalar value c
and a scalar field F, the isosurface S is defined as the set S(c) = {x in R3 : F(x) = c}. In this
process, all data cells are first visited to identify the cells that intersect the isosurface. Then,
the polygon(s) necessary to represent the portion of the isosurface within each such cell are
generated and stored. At the end of this process, the obtained set of polygons gives a
piecewise linear representation of the isosurface. In volume rendering, each ray contribution
can be computed independently. The same is true for each portion of an isosurface. Hence,
both methods can be efficiently implemented on distributed memory machines.
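The first pass of this extraction, identifying the cells that intersect S(c), can be sketched with vectorized corner tests: a cell must emit polygons exactly when its corner values straddle the isovalue. The spherical scalar field below is an illustrative example, not data from the project.

```python
import numpy as np

def cells_intersecting(F, c):
    """Flag the grid cells whose 8 corner values straddle the isovalue c;
    these are the cells that must emit polygons in Marching Cubes."""
    # The 8 corner arrays of every cell, obtained as shifted views of F
    corners = [F[i:F.shape[0] - 1 + i, j:F.shape[1] - 1 + j, k:F.shape[2] - 1 + k]
               for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    lo = np.minimum.reduce(corners)
    hi = np.maximum.reduce(corners)
    return (lo <= c) & (c <= hi)

# Scalar field F(x) = ||x||^2 on a grid: the isosurface F = c is a sphere
n = 16
ax = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
F = X ** 2 + Y ** 2 + Z ** 2
mask = cells_intersecting(F, 0.5)
```

Only the flagged cells proceed to the per-cell triangulation step, and since each cell is processed independently, the work partitions naturally across distributed memory machines.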
We have incorporated volume rendering and isosurface techniques into the PyImageVis
system, a software package implemented in the Python language that we are developing for
image processing and visualization of 3D images. The graphical user interface follows the
MatLab philosophy. Its focus is to offer a scientific computing system for researchers to test
algorithms for image processing and visualization. Figure 4.2.6 pictures the main software
interface and an application example of isosurface visualization.
Figure 4.2.6. PyImageVis interface and application example.
4.2.4) Distributed visualization and management systems
Grid computing provides transparent access to distributed computing resources such as
processing, network bandwidth and storage capacity. In order to support complex
visualization applications in grid environments, the Query Evaluation Framework (QEF)
was designed.
QEF has been extended to implement querying within a number of different applications,
including isosurface generation and volume rendering, well-known techniques in the field of
scientific visualization. Application requests take the form of a workflow in which tasks are
represented as algebraic operators and specific data types are enveloped into a common
tuple structure. The implemented system is automatically deployed onto scheduled grid nodes
and autonomously manages query evaluation according to grid environment conditions.
To operate an organization like the INCT-MACC, we need management systems to share
image processing databases and resources. Therefore, we are developing a medical and
biological imaging management system that will be used to share images and information
between researchers via a client/server architecture. It will be developed using the
PostgreSQL DBMS, with PHP for the web interface.
4.2.5) Digital prosthesis design
Modelling and visualization systems have revolutionized many scientific disciplines by
providing novel methods to visualize complex data structures and by offering the means to
manipulate these data in real time. With these systems, surgeons are able to: navigate
through the anatomy, practice both established and new procedures, learn how to use new
surgical tools, and assess their progress.
One example of such an application is digital prosthesis design. For instance, the restoration
and recovery of a defective skull can be performed through operative techniques to implant a
customized prosthesis. We present a framework for skull prosthesis modeling. Firstly, we take
the computed tomography (CT) of the skull and perform bone segmentation by thresholding.
Figure 4.2.7 shows the segmentation result and the corresponding skeleton, obtained by
morphological operations (red curve). This curve is used to extract boundary conditions for
the next steps.
Figure 4.2.7. Bone segmentation and skeleton in red.
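The thresholding step can be sketched as follows (the toy array and the threshold value are illustrative, not taken from the actual CT data):

```python
import numpy as np

# Hypothetical toy CT slice; bone appears as the bright (high-intensity) region.
ct_slice = np.array([
    [ 10,  20, 400, 420,  15],
    [ 12, 380, 450, 410,  18],
    [ 11,  25, 430,  30,  14],
])

threshold = 300                     # illustrative bone threshold
bone_mask = ct_slice >= threshold   # binary segmentation by thresholding
```

The skeleton (red curve in Figure 4.2.7) would then be obtained by applying morphological thinning to `bone_mask`, slice by slice.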
The obtained binary volume is processed, frame-by-frame, to get the boundary conditions
(Figure 4.2.8) to initialize a 2D deformable model that generates the prosthesis skeleton in
each CT frame.
Figure 4.2.8. Initial conditions for deformable model.
Then, we dilate each patch (Figure 4.2.9) to complete the prosthesis volume, which is the
input for a marching cubes technique that generates the digital model of the target. In the
experimental results we demonstrate the advantages of our technique and compare it with a
related one.
Figure 4.2.9. Reconstruction for a CT slice.
4.2.6) Automated segmentation techniques with applications for Alzheimer’s disease
This point considers the study and practical testing of automated segmentation techniques
for anatomical structures, with a view to their use in computer-aided diagnosis applications,
mainly for Alzheimer’s disease.
Alzheimer’s disease (AD) is a progressive neurological disease of the brain that leads to
irreversible loss of neurons and, ultimately, to dementia. The disease is associated with the
pathological accumulation of amyloid plaques and neurofibrillary tangles in the brain; it first
affects memory systems, progressing to compromise of language, executive function and
behavior. AD is the most common cause of dementia among the elderly, accounting for
49.9% to 84.5% of cases in Latin America. Dementia prevalence reached 6.8% in a
community of the city of São Paulo, Brazil, considering subjects aged 60 years and over,
mirroring the rates of developed countries. Dementia is becoming a major public health
problem in Latin America, as in many countries undergoing a demographic transition in which
the elderly represent a significant proportion of the total population, according to the latest
reports published by the Brazilian Institute of Geography and Statistics (IBGE). By 2020,
subjects aged 60 years and over will number more than 32 million, representing more than
13.8% of the Brazilian population.
The hippocampus and the amygdala are among the first structures affected in AD.
Hippocampal volumetric measurements using magnetic resonance imaging (MRI) provide a
sensitive biomarker for AD. Studies have proposed volumetric measures of the hippocampus
to differentiate normal aging from AD and from mild cognitive impairment (MCI). However,
hippocampal volumetric measurement mostly relies on highly time-consuming manual
segmentation, which is subjective and not feasible in clinical routine. An expert may require
30 minutes to trace a single structure such as the hippocampus. Fortunately, automatic
methods using computational algorithms have provided consistent results comparable to
manual segmentation.
In our work, three fully automatic methods implemented in public-domain applications are
compared: FIRST/FSL, IBASPM and Freesurfer. FIRST/FSL (http://www.fmrib.ox.ac.uk/fsl) is
a model-based segmentation and registration tool used to obtain subcortical brain
segmentations with Bayesian shape and appearance models, developed by the FMRIB group
at Oxford University. IBASPM (http://www.thomaskoenig.ch/Lester/ibaspm.htm) is a toolbox
for brain segmentation of structural MRI, developed by the Cuban Neuroscience Center.
Freesurfer (http://surfer.nmr.mgh.harvard.edu/) is a set of software tools for the study of
cortical and subcortical anatomy, developed by members of the Athinoula A. Martinos Center
for Biomedical Imaging. Our first goals were: (1) to analyze and synthesize the medical image
analysis pipeline of each of the three methods cited above; (2) to compare the hippocampus
and amygdala volumes obtained by each method, quantifying the volume agreement among
them; and (3) to discuss the quality of the volumetric measurements obtained with each
method.
The three methods compared here are well-known public-domain medical image analysis
tools, applied to segment anatomical brain structures automatically using three-dimensional
images acquired by magnetic resonance scanners. We synthesized the image analysis
pipeline of each medical image analysis package. In order to facilitate comparison, we
defined high-level image analysis processing stages and assigned them to each pipeline
stage of a particular package. The following items describe the stages; Figures 4.2.10-4.2.12
graphically show the image analysis pipeline of each package studied:
1. Preprocessing. The preprocessing involves preparing the images for feature selection
and correspondence. Thus, image enhancements are performed, such as image
contrast improvement, image noise removal, local field inhomogeneity correction and
slight head movement correction. The preprocessing phase can also perform image
reorientation. The image orientation parameters are typically recorded in the image
header. However, when using different image file formats, one should make sure the
image orientation read by the application is correct. If it is not, the orientation must be
corrected, either by fixing the image header or by resampling the whole image.
2. Registration. Image registration is the process of aligning images so that
corresponding features can easily be related. The term is also used to mean aligning
images with a computer model or aligning features in an image with locations in
physical space or a standard coordinate system. In neuroimaging, there are two
well-known, public-domain standard coordinate systems: (1) the Talairach
coordinate system and (2) the MNI coordinate system.
3. Segmentation. Image segmentation methods are grouped into two classes: (1) in
voxel-based methods, the anatomical brain structure is segmented using its voxel
features. Several voxel features are used to represent the structure of interest: signal
intensity; x, y and z positions; and neighborhood-based features obtained with gradient
filters, mean filters, standard deviation filters and Haar filters of different sizes. The
three-dimensional positions can be determined using stereotaxic coordinates after
spatial normalization to the standard spaces, or using spherical coordinates. (2) In
vertex-based methods, the anatomical structure of interest is represented by its
contour. A contour is modeled by a series of connected vertices along the edge of a
structure. A deformable contour is an iterative procedure that aims at finding the
contour of the structure of interest, such that the vertices take small steps towards the
anatomical structure boundary at each iteration. Each vertex is driven by the image
intensities, typically being attracted to voxels with large intensity gradients. Physical
parameters such as tension, rigidity and curvature can be used to constrain the
deformable contour and improve robustness.
4. Labeling. Image labeling is the process of assigning a label to every voxel in an image
such that voxels with the same label share certain visual characteristics and belong to
the same anatomical structure.
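A single iteration of a vertex-based deformable contour, of the kind described in item 3, can be sketched as follows. This is a minimal illustration with assumed parameter names, not code from any of the three packages: each vertex moves by a blend of a tension term (towards the midpoint of its neighbours) and an external term (up the gradient-magnitude field).

```python
import numpy as np

def deform_step(vertices, gradient_mag, alpha=0.5, step=1.0):
    """One iteration of a simple 2D deformable contour (sketch).

    Each vertex moves a small step towards higher image-gradient
    magnitude (external force), blended with a tension term that pulls
    it towards the midpoint of its neighbours (internal force).
    """
    n = len(vertices)
    new = np.empty_like(vertices)
    for i in range(n):
        prev_v, next_v = vertices[(i - 1) % n], vertices[(i + 1) % n]
        tension = 0.5 * (prev_v + next_v) - vertices[i]
        # Finite-difference estimate of the gradient of |grad I| at the vertex.
        y, x = np.round(vertices[i]).astype(int)
        gy = gradient_mag[min(y + 1, gradient_mag.shape[0] - 1), x] \
             - gradient_mag[max(y - 1, 0), x]
        gx = gradient_mag[y, min(x + 1, gradient_mag.shape[1] - 1)] \
             - gradient_mag[y, max(x - 1, 0)]
        external = np.array([gy, gx])
        new[i] = vertices[i] + step * (alpha * tension + (1 - alpha) * external)
    return new
```

Iterating `deform_step` until the vertices stop moving yields the converged contour; `alpha` trades smoothness (tension) against fidelity to image edges.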
Figure 4.2.10. Freesurfer image analysis pipeline.
Figure 4.2.11. FSL image analysis pipeline.
Figure 4.2.12. IBASPM image analysis pipeline.
Figure 4.2.13 shows the left and right amygdala volumes obtained by the three image
segmentation packages. The volumes are displayed on 2D coronal images. We observed that
Freesurfer included part of the superior region of the hippocampus and CSF (cerebrospinal
fluid), as indicated in Figure 4.2.13.a, as well as part of the WM (white matter). IBASPM
excluded a number of regions belonging to the amygdala, as shown in Figure 4.2.13.b, which
explains why the amygdala volumes from IBASPM presented smaller values. Regarding FSL,
parts of neighboring structures were included in the amygdala region, including the WM
and CSF, as shown in Figure 4.2.13.c.
Figure 4.2.14 shows the left and right hippocampus volumes obtained by the three
computational methods. Regarding Freesurfer, we observed that when the sulcus between
the hippocampus and the parahippocampal gyrus is not clearly defined, the upper cortex of
the parahippocampal gyrus was included in the hippocampus volume segmentation. This
problem probably does not occur in the elderly population, as this sulcus becomes more
pronounced between these two structures, allowing better boundary discrimination. The
dentate gyrus, subiculum and upper area of the parahippocampal gyrus (entorhinal cortex)
appear to have been included in the hippocampus volume, considering the images belonging
to both groups studied, as shown by the red arrows in (a3), Figure 4.2.14. Freesurfer also
included the inferior cornu of the lateral ventricle. In the most posterior portion (tail),
Freesurfer included part of the fornix, as shown by the red arrow in (a1), Figure 4.2.14. The
hippocampus region-of-interest criteria of Freesurfer were not clearly described in the
program’s technical manual. In FSL, the limits of the hippocampus were expanded to include
the adjacent white matter area and other neighboring structures, yielding significantly inflated
volumes. FSL also included the fornix and entorhinal cortices. We noticed that the
hippocampal volumes obtained from IBASPM showed discontinuous areas, as shown in (b3)
and (b4), Figure 4.2.14. The errors in hippocampal volume from IBASPM arose from
inaccurate image registration and from the use of the MNI single-subject template for the
manually predefined region of interest of the hippocampus.
Figure 4.2.13. Image slices showing the left (L) and right (R) amygdala of a normal
subject: 1-3 indicate slices from most posterior to most anterior; (a), (b) and (c)
show the results of Freesurfer (yellow), IBASPM (red) and FSL (green), respectively.
Figure 4.2.14. Image slices showing the left (L) and right (R) hippocampus of a normal
subject: 1-4 indicate slices from most posterior to most anterior; (a), (b) and (c)
show the results from Freesurfer (yellow), IBASPM (red) and FSL (green), respectively.
4.2.7) Deriving fuzzy rules based on rough sets with the aim of pattern classification
Fuzzy rule-based systems have been successfully used in the solution of various control
problems. A fuzzy classifier takes into account the uncertainty inherent in most real
classification problems. The fuzzy rules for these systems can be derived from human
experts as linguistic if-then rules. However, in many applications the knowledge required to
derive these rules may not be easily available, and humans may be unable to extract it from
a massive amount of numerical data. Recently, several works have proposed to generate
fuzzy rules automatically from numerical data. Considerable effort has been concentrated on
the use of genetic algorithms (GA) to obtain fuzzy rules automatically and to tune fuzzy
partitions of the input space. Genetic algorithms are robust due to their global search, but
they involve intensive computation, and the results depend strongly on the fitness function
and on GA parameters such as the number of generations, population size, crossover and
mutation rates, tournament size, crossover type and the stop criterion.
The use of rough sets to support fuzzy rule-based systems is still a challenge. A few works
have addressed the classification problem with rules based on rough sets; however, in these
works rough set theory is not used directly to generate the fuzzy rules. These methods
do not take into account the ambiguity of the data and, consequently, the lack of evidence (or
ignorance) in classifying a given pattern into one of the existing classes. This work overcomes
this problem. We propose a novel method to generate fuzzy rules automatically based on
rough set theory. The universe of knowledge of a given application is grouped into
different combinations, of different sizes, using a data mining algorithm. Each of these
groups is referred to as a knowledge granule. The proposed method analyzes each knowledge
granule in order to obtain concise rules (with a reduced number of antecedent terms) and
high coverage. Due to the lack of information or the uncertainty inherent in the application,
two objects can present similar features but belong to different classes. In the face of such
ambiguous information, the proposed fuzzy classification system is able to distinguish
between evidence and ignorance about a given pattern.
The proposed method has been tested using two public datasets: the Diagnostic Wisconsin
Breast Cancer Database (WDBC, http://archive.ics.uci.edu/ml/datasets/), with 569 samples,
10 attributes and 2 classes; and the Prognostic Wisconsin Breast Cancer Database (WPBC),
with 669 samples, 10 attributes and 2 classes. To evaluate the results of the classifier for
each dataset, we used 10-fold cross-validation and computed the area under the averaged
ROC curve (AUC). The AUC values obtained for the two datasets are 0.99 and 0.94,
respectively. The results are superior to those of related works using the same datasets (see
Tables 4.2.1 and 4.2.2). Further work is in progress to test our proposal on larger medical
image databases.
Table 4.2.1. Comparison of results using the WDBC dataset (10 features).

Method                AUC
The proposed method   0.99
SVM                   0.97
C4.5                  0.92
Linear discriminant   0.93
Table 4.2.2. Comparison of results using the WPBC dataset (10 features).

Method                AUC
The proposed method   0.94
KBSSVM                0.68
Neural Network        0.93
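The evaluation protocol (10-fold cross-validation with AUC) can be sketched independently of the proposed classifier. The helper below computes the AUC via the Mann-Whitney statistic and averages it over folds; `train_and_score` is a placeholder for any classifier, not the rough-set method itself.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Fraction of (positive, negative) pairs ranked correctly,
    # counting ties as half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def kfold_auc(X, y, train_and_score, k=10, seed=0):
    """Average AUC over k folds.

    train_and_score(X_train, y_train, X_test) must return one score
    per test sample (higher = more likely positive).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    aucs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        scores = train_and_score(X[train], y[train], X[test])
        aucs.append(auc(y[test], scores))
    return float(np.mean(aucs))
```

An AUC of 1.0 corresponds to perfect ranking of positives above negatives, and 0.5 to chance; the averaged-fold value is what the tables above report.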
4.2.8) Quantification of left ventricle movement (continuation of 4.1.6)
Here we complement the activities presented in Section 4.1.6 regarding the quantification of
the movement of the left ventricle.
The description of the left ventricular movement is based on a 3D velocity vector field, where
each vector represents the voxel motion. The proposed representation must then be able to
present the three pieces of information that comprise each velocity vector: intensity, direction
and orientation. In order to evaluate the movement of the LV, three movement directions were
defined, each with two possible orientations: (1) radial movement is described as a
contraction towards the center of the LV during systole and as an expansion from the center
during diastole; (2) horizontal rotation represents the clockwise and counter-clockwise
movement of the cardiac walls; (3) vertical rotation represents the movement towards the
base (upwards) during systole and towards the apex during diastole (downwards).
A desired feature of the representation is that all information concerning a movement direction
is presented in a single image, meaning the orientation and the intensity of a given velocity
direction. Aiming at this goal, a color coding scheme is defined as follows: for each direction,
the color assigned to a voxel indicates the orientation of the movement, either positive or
negative, and the strength of the color indicates the intensity of the velocity vector in that
direction. Positive and negative orientations for each movement direction are defined as
follows: (1) radial: expansion is positive, contraction is negative; (2) horizontal rotation:
clockwise rotation is negative, counter-clockwise rotation is positive; (3) vertical rotation:
downwards motion is positive, upwards motion is negative. A discrete lookup table is used to
map velocity intensity: no motion is depicted as white, positive values are depicted as blue
and negative values are depicted as red.
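Such a discrete lookup table can be sketched as follows (`v_max` and the number of levels are illustrative choices, not values from this work): zero velocity maps to white, positive values fade towards blue and negative values towards red.

```python
def velocity_to_rgb(v, v_max=10.0, levels=5):
    """Discrete lookup mapping a signed velocity to an RGB triple (sketch).

    Zero motion maps to white; positive values fade towards blue and
    negative values towards red, with `levels` discrete intensity steps.
    `v_max` and `levels` are illustrative parameters.
    """
    # Quantize |v| into discrete strength steps in [0, 1].
    strength = min(abs(v), v_max) / v_max
    strength = round(strength * levels) / levels
    if v >= 0:   # positive orientation: white -> blue
        return (1.0 - strength, 1.0 - strength, 1.0)
    else:        # negative orientation: white -> red
        return (1.0, 1.0 - strength, 1.0 - strength)
```

Applying the function voxel-wise to the component of the velocity field along one of the three defined directions produces images like those in Figure 4.2.18.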
The bull’s eye projection is a 2D map of a 3D left ventricle, whose display is a polar projection
with apex in the center, mid-cavity in the middle, and the base in the periphery. The standard
representation for the bull’s eye, established by the American Heart Association (AHA), is
independent of the size and orientation of the heart. It divides the left ventricle into three thick
slices, with a distribution of 35%, 35%, and 30% for the basal, mid-cavity, and apical thirds of
the left ventricle. These slices are further divided into segments, giving a total of 17
segments, as shown in Figure 4.2.17.a. The 17 segments are then arranged in a polar
projection as shown in Figure 4.2.17.b.
Figure 4.2.17. (a) Segmentation of the left ventricle in 17 segments; (b) The segments
displayed on a circumferential polar plot.
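The mapping from a ventricular location to one of the 17 AHA segments can be sketched as below. The long-axis cut-offs follow the 35%/35%/30% distribution quoted above, while the angular convention and the apical-cap cut-off are illustrative assumptions.

```python
def aha_segment(z, theta):
    """Map an LV point to its AHA 17-segment index (sketch).

    z: normalized long-axis position (0 = apex tip, 1 = base), divided
       35%/35%/30% into basal, mid-cavity and apical thirds as in the text.
    theta: circumferential angle in degrees (the angular origin and the
       apical-cap cut-off below are illustrative assumptions).
    """
    theta = theta % 360.0
    if z >= 0.65:                      # basal third: segments 1-6
        return 1 + int(theta // 60)
    if z >= 0.30:                      # mid-cavity third: segments 7-12
        return 7 + int(theta // 60)
    if z >= 0.05:                      # apical third: segments 13-16
        return 13 + int(theta // 90)
    return 17                          # apical cap
```

Averaging the velocity intensity of all voxels that map to the same segment index gives the per-segment values displayed on the polar plots of Figure 4.2.19.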
The method was applied to 3D gated-SPECT (99mTc-MIBI) images obtained from normal
subjects and patients with intraventricular dyssynchrony that were diagnosed as having
severe Idiopathic Heart Failure. These gated images were acquired in Nuclear Medicine. In
Figure 4.2.18 we present the velocity images of a normal subject using the color coding
scheme for the three movement directions in a slice from the mid-cavity portion.
To observe all sequences of movements of the heart throughout the cardiac cycle, however,
one would have to look at all slices from the 16 gated-SPECT volumes, a cumbersome and
time-consuming task. The functional standardized bull’s eye allows the visualization of the
velocity information in a chosen direction for a whole volume. In Figure 4.2.19 we present the
velocity bull’s eyes of a normal subject – that follow the same patterns of the other normal
subjects - for the three defined directions, in diastole and systole. The information presented
is the mean velocity intensity from the voxels belonging to each segment.
Figure 4.2.18. Velocity images of a normal subject in a slice from the mid-cavity portion of the
LV. Line 1 presents diastolic images and line 2 presents systolic images. Column (a) depicts
the images of the radial component, column (b) the images of the horizontal rotation
component and column (c) the images of the vertical component. The color scale is shown at
the top right of the figure.
Figure 4.2.19. Velocity polar maps of a normal subject. Line 1 depicts the maps at the
maximum relaxation frame in diastole and line 2 depicts the maps at the maximum contraction
frame in systole. Column (a) depicts the maps of the radial component, column (b) the maps
of the horizontal rotation component and column (c) the maps of the vertical component. The
information presented in each segment is the mean velocity intensity from the voxels
belonging to it.
Experiments for testing the methodology were performed on phantoms, showing the potential
of the technique.
4.3) Area 3: Collaborative virtual environments
4.3.1) Immersive & collaborative virtual environments for medical training
One of our main goals was to exploit Collaborative Virtual Environments (CVE) as an aid to
telemedicine. This goal can be achieved through the exploitation of immersive (as well as
non-immersive) CVEs for medical training, monitoring, surgical planning, medical case
discussion, as well as eventual medical telemanipulation. On this front we have implemented a
few prototypes.
ACAmPE – Collaborative Environment for the Oil Industry Safety
This prototype aimed at training users who would be aboard an offshore oil platform. The
training included safety drills, such as finding escape routes in case of emergencies. Figure
4.3.1 shows the designed interface. ACAmPE was developed in Java/Java3D and
implemented a generic Virtual World partitioning, which allowed the system to handle very
large models.
Figure 4.3.1. ACAmPE Interface.
ACOnTECe – Collaborative Environment for Surgical Training
ACOnTECe was more directly a medical application. It was both a 3D Atlas of the human
heart and a first sketch of a surgical simulator. The interface (Figure 4.3.2) displays some
surgical equipment (saw, scalpel, etc.), which could be used to perform some pre-configured
tasks. ACOnTECe was also developed in Java/Java3D. The tool used transparency to allow
better access to the geometry of the 3D structure.
Figure 4.3.2. ACOnTECe Interface.
In the surgical module (Figure 4.3.3), one can interact with the system using standard devices
such as mouse/keyboard. A later implementation also handled a haptic device (Phantom
Omni), but it was used just as an input device.
Figure 4.3.3. ACOnTECe with some steps of a surgery.
EnCIMA Engine – An Engine for Collaborative and Immersive Multimedia Applications
More recently we developed a full graphics engine to support the development of immersive
virtual environments. A graphics engine is a key component in a VR application, being
responsible for performing important tasks such as accessing input devices, resource
management, updating components of the virtual environment, rendering the 3D scene and
presenting the final result through display devices. EnCIMA’s design follows an architecture
that divides its functionality into three layers (application, core, and sub-systems), as
shown in Figure 4.3.4. These layers, in turn, are organized into modules with a strict
hierarchical dependency.
Figure 4.3.4. EnCIMA engine architecture.
A. The sub-systems layer
The sub-systems layer is at the lowest level and is composed of several modules that offer
specific services to the core which, in turn, transforms them into high-level actions for the
application. Therefore, every module of this layer is characterized by the application domain it
is targeted at, the tasks assigned to the module, and the technology employed to execute these
tasks.
Consider, for instance, the I/O Drivers module, which is responsible for recognizing the
interaction devices plugged into the application. The I/O Drivers module’s primary task
is to receive user actions entered through devices like mouse, keyboard, joystick, data gloves,
and position tracking systems. For that purpose, the I/O Drivers module must provide
functions that recognize buttons being pushed, handle requests for updating the cursor
position, interpret tracker orientation and positioning in terms of the VE’s coordinate system, etc.
EnCIMA renders using the OpenGL API, whereas the audio is handled by DirectX. The
graphics user interface, interaction devices driver, and network connection are all handled by
native Windows API.
B. The core layer
The core is responsible for providing the link between the application layer and the engine’s
available sub-systems. Johnston defines the core as the engine’s central component or
server, responsible for invoking the appropriate functions with the right parameters in
response to events generated by the user or the environment. Therefore, all modules from the
sub-systems layer must register with the core, so that the core is able to initiate each
registered module and coordinate the interaction and data exchange among the engine’s
components.
Similarly to the sub-systems layer, the core is also organized into several modules, called
managers, each of which has specific responsibilities that are fundamental for the engine’s
proper functioning. These responsibilities may include access to the local file system,
resource management, mathematics core library, activity log, and the specification of the
manipulation interface between objects from the VE and the associated devices.
EnCIMA’s core layer is composed of a set of manager modules (cf. Figure 4.3.4), defined as
C++ classes. Among the manager modules in the core, we highlight the graphics, input,
sound, network, and collision detection managers. The graphics manager is responsible
for the allocation of graphics resources, the loading of several image formats, texturing,
terrain generation, particle systems effects, scene graph management, and all the graphics
related features. The graphics manager also provides an automated garbage collection
system that shares graphics resources, and uses reference-counted memory to avoid
unnecessary duplication of resources and memory leaks.
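The reference-counting idea behind the resource sharing described above can be sketched as follows (a Python illustration with hypothetical names; the actual manager is a C++ class inside the engine): identical resources are shared rather than re-loaded, and a resource is released only when its last user is gone.

```python
class ResourceManager:
    """Sketch of a reference-counted resource cache (illustrative names).

    Identical resources (e.g. a texture requested twice) are shared
    instead of duplicated; a resource is freed only when its reference
    count drops to zero, avoiding both duplication and leaks.
    """
    def __init__(self, loader):
        self._loader = loader               # function: name -> resource
        self._cache = {}                    # name -> [resource, refcount]

    def acquire(self, name):
        entry = self._cache.get(name)
        if entry is None:                   # first request: actually load
            entry = self._cache[name] = [self._loader(name), 0]
        entry[1] += 1                       # one more user of this resource
        return entry[0]

    def release(self, name):
        entry = self._cache[name]
        entry[1] -= 1
        if entry[1] == 0:                   # last user gone: evict
            del self._cache[name]

loads = []
mgr = ResourceManager(lambda name: loads.append(name) or f"<{name}>")
a = mgr.acquire("wall.png")
b = mgr.acquire("wall.png")   # shared: the loader is NOT called again
mgr.release("wall.png")
mgr.release("wall.png")       # refcount hits zero: entry evicted
```

The same bookkeeping, applied to GPU textures and meshes, is what prevents unnecessary duplication of graphics memory in the engine.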
The input manager communicates with every input device supported by the engine. This
module detects, initiates, and uniquely identifies all input devices plugged into the application,
making them available to other managers through an abstract device. This module is
responsible for recognizing a variety of VR devices such as 3D mice, data gloves, tracking
systems, joysticks (with or without force feedback), and phantom-like haptic devices. The
Code 1 snippet shows the usage of the input manager to initiate and obtain the angles from
one of the 3D positioning tracker’s sensors, whereas in Figure 4.3.5 we display some of the
devices supported by this manager.
Figure 4.3.5: Some devices supported by the EnCIMA engine.
The sound manager coordinates the loading of 2D and 3D sounds, allowing the application to
set up various sound parameters such as volume, 3D position, area of influence, and
attenuation. The net manager is responsible for establishing network connections between
servers and clients for collaborative applications. The data exchange follows a multicast-based client-server model. This means that every server is responsible for both managing
groups of client applications and delivering modifications to all the participants of a given
group.
The collision manager prevents virtual objects from penetrating one another. A pre-stage, or
broad phase, in collision detection identifies the moment when objects get close enough to be
considered for actual collision tests (the narrow phase). For the broad phase it is possible to
choose among methods that focus on space occupation, such as regular grids or BSP trees,
or a new method (in development) that focuses on the virtual object itself, called area of
interest. The latter method employs the same principles found in the area of Collaborative
Virtual Environments, where they are used to reduce message exchange among participants,
to reduce the number of collision tests among dynamic objects. The main feature of the
collision manager is the use of
a dedicated physics processing unit to accelerate the narrow phase of the collision detection
process.
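A broad phase based on a regular grid, one of the options mentioned above, can be sketched as follows (a 2D illustration with hypothetical names): only objects that fall into the same grid cell are reported as candidate pairs for the more expensive narrow-phase test.

```python
from collections import defaultdict
from itertools import combinations

def broad_phase_pairs(objects, cell_size):
    """Broad-phase collision culling with a uniform grid (sketch).

    objects: dict mapping a name to an (x, y) center position. Objects
    sharing a grid cell become candidate pairs for the narrow phase;
    everything else is culled without any pairwise test.
    """
    grid = defaultdict(list)
    for name, (x, y) in objects.items():
        grid[(int(x // cell_size), int(y // cell_size))].append(name)
    pairs = set()
    for bucket in grid.values():
        for a, b in combinations(sorted(bucket), 2):
            pairs.add((a, b))
    return pairs

objects = {"a": (0.2, 0.3), "b": (0.4, 0.1), "c": (5.0, 5.0)}
candidates = broad_phase_pairs(objects, cell_size=1.0)
```

In a real engine each object's bounding volume would be inserted into every cell it overlaps; the point-based version above only shows the culling idea that both the grid and the area-of-interest methods exploit.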
C. The application layer
The application corresponds to the target software that has been built upon the core’s
functionality. It is the application designer’s responsibility to specify how the VE should
look, load and position all 3D models, set up sensors to trigger the assigned animation
and visual effects, as well as to define the VE’s response based on either the engine’s state
machine or user interaction.
The EnCIMA engine was designed to enable developers to get their applications up and
running as quickly and simply as possible. The engine offers an easy-to-use object-oriented
interface that reduces the effort required to render 3D scenes, at such a high level that the
application becomes independent of any third-party 3D graphics rendering API (e.g.,
Direct3D or OpenGL). For that reason, the developer does not need previous or specific
knowledge of how to program a given API, nor of the interaction with special input/output
devices.
Through this layer an application has access to all the engine’s functionality and resources.
The Scene class, shown on the top of Figure 4.3.4, is the realization of this layer. This class
contains a high-level Manager object that has a reference to every manager located in the
core. This centric approach facilitated the implementation of the Scene class where several
high-level functions are available to the developer.
Using the EnCIMA engine we have developed a few applications, the first of which was a full-scale 3D atlas of the human body.
AVIDHa - 3D Haptic Anatomy Atlas
The application, called AVIDHa ("Atlas Virtual Interativo Distribuído Háptico", or Distributed
Virtual Human Atlas with Haptic Sense), is a 3D human body atlas for the purpose of anatomy
study. AVIDHa allows students to interactively explore the several human body systems
through the senses of touch and stereoscopic vision. The human body systems are available
as high definition 3D models with photo-realistic textures, as shown in Figure 4.3.6.
The application allows the anatomy student to fly through and inside the human body. The
flight and exploration modes are controlled with either a 3D mouse or a joypad. The student may also
choose to investigate each system separately, change an organ’s opacity to examine internal
parts, capture screenshots for later examination, manipulate clipping planes to explore the
inner parts of a given system, or even use a haptic device, such as the Sensable Phantom
Omni, to feel an organ’s density and contours.
Figure 4.3.6. AVIDHa Main Interface.
AVIDHa may also run as a distributed collaborative application, allowing geographically
distant users to interact through EnCIMA’s network support. In this case a mediator, possibly
an expert in anatomy, may drive the simulation and share his/her knowledge with the other
participants. When started, the application needs to load all 3D body systems. After that the
rendering starts, which can be either monoscopic, displayed on a typical desktop monitor, or
stereoscopic, displayed on a multi-screen projection or CAVE-like display.
Figure 4.3.7. AVIDHa in a CAVE setup.
In terms of performance for the AVIDHa application, the engine delivered a refresh rate of
approximately 30 frames per second, for 3D models with 5.3 million polygons and 87.8
MBytes of texture, running on an Intel Pentium D (3.0 GHz, 2.0 GBytes of RAM) with an
NVIDIA GeForce 8800 GTX. Figure 4.3.7 shows AVIDHa working in a CAVE environment.
Low-cost CAVE
As shown in Figure 4.3.7, we have developed a low-cost CAVE at the LNCC. The setup is
controlled by a cluster of four computers (one controlling each of the four walls) and uses
passive circular-polarization stereoscopic projection, with two off-the-shelf NEC LT245 DLP
projectors for each wall. The CAVE structure is built out of PVC pipes. The hardware is
controlled by InstantReality and the performance has been quite acceptable. Figure 4.3.8
shows the CAVE setup seen from above (a) and in detail (b), whilst Figure 4.3.9 shows the
CAVE in use.
(a)
(b)
Figure 4.3.8. CAVE Setup. (a) from above, (b) 3D model of the setup.
Figure 4.3.9. CAVE displaying blood vessels with flow speed in colours.
Gesture recognition
We also carried out some development on gesture recognition through stochastic equations.
This technique allows one to control a 3D environment through intuitive gestures.
Ongoing work
At this stage we are designing a portable VR system, which will allow easier deployment of
3D applications. Without such a portable system, one needs to go to a facility that has the
setup (a CAVE, for instance), which is not possible in most medical cases. With the portable
system, one can carry the setup wherever it is needed.
4.3.2) Multi-sensorial virtual environments for medical data visualization
We investigated the best way to harness haptic rendering to support the modelling process of
craniofacial prostheses. The aim is to design and develop a virtual environment in which
specialized users would be able to virtually sculpt prostheses. The data representing the
prostheses come from a three dimensional reconstruction process based on 3D imaging of
patients who need to receive a prosthesis implant. The 3D reconstruction processes,
however, is part of another project that integrates the INCT-MACC. The specific goals of this
work are to investigate the best way to represent a craniofacial prosthesis so that it can
support sculpture operations. Furthermore, we will generate a prototype virtual environment in
which the sculpting process will take place, thereby providing means to verify whether the
resulting prostheses fits into the 3D model of the patient’s face.
The main goal of medical imaging is to use images from magnetic resonance imaging (MRI), computed tomography (CT), X-ray, and ultrasound to aid in the diagnosis and treatment of disease, to offer intraoperative support in medical procedures, and to foster educational applications for learning about the human body.
The treatment of myocardial infarction, stenosis, cerebrovascular accident, aneurysm, pulmonary embolism, vascular malformations and other vascular diseases relies on the angiogram as the evaluation examination. An angiogram uses X-ray imaging to visualize the patient's vascular structure. A contrast agent is injected into a vein or artery through a catheter to highlight those structures and facilitate observation by the physician.
Nonetheless, in some particular situations certain vascular structures are difficult to see. When the vessel is very fine, the image noisy, or its intensity very similar to that of the neighboring tissues, the anatomic shape may fuse with its surroundings. This situation can hinder the diagnosis process, even for experienced eyes.
The use of computer tools may assist the medical imaging process, helping doctors to avoid errors during the diagnosis stage. Therefore, the goal of this work is to help the segmentation of blood vessels by initially applying thresholding, followed by skeletonization (or thinning) to create centerlines that will be used as reference for an automatic verification process.
That verification process tries to eliminate regions that are not vessels by comparing their diameter to an acceptance interval. The final objective is to run the entire process on the GPU (Graphics Processing Unit), so as to obtain a considerable performance gain. Figure 4.3.10 shows the expected results.
Figure 4.3.10. Example of blood vessel segmentation using threshold and thinning.
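The pipeline just described can be sketched as follows. This is an illustrative pure-Python sketch, not the project's GPU implementation; a simple run-length width check stands in for the skeleton-based diameter verification:

```python
def segment_vessels(img, thr, d_min, d_max):
    """Threshold the image, then keep only foreground runs whose width
    falls inside the acceptance interval [d_min, d_max]."""
    height, width = len(img), len(img[0])
    mask = [[1 if img[y][x] >= thr else 0 for x in range(width)] for y in range(height)]
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        x = 0
        while x < width:
            if mask[y][x]:
                start = x
                while x < width and mask[y][x]:
                    x += 1
                if d_min <= x - start <= d_max:  # diameter acceptance test
                    for i in range(start, x):
                        out[y][i] = 1
            else:
                x += 1
    return out

# One bright 2-pixel-wide vessel is kept; a 5-pixel-wide blob is rejected.
vessels = segment_vessels([[0, 200, 200, 0, 200, 200, 200, 200, 200, 0]],
                          thr=100, d_min=1, d_max=3)
```

The same per-pixel structure (threshold, then an independent local test) is what makes the real pipeline a good fit for GPU parallelization.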
Ongoing work
We aim at designing and implementing computational modules that provide remote rendering of OpenGL-based graphics applications without the need to rewrite or modify the original application. The principle is to use a high-speed network to send and receive the results of graphics processing as a video stream, thereby compensating for the low graphics processing power of a thin client, such as a mobile phone, simple terminal, or PDA. The proposed service will consist of software installed on the client that intercepts requests for graphics operations and sends them to a previously configured server; the rendered result is then returned to the requesting application, which presents it to the user.
Therefore, it will be possible to separate the user interface from the graphics processing of the core application. The user interface runs on the client side, while the graphics processing is delegated to a more appropriate server and the rendering result is returned to the user through the local graphical user interface. Ideally, the user will not notice that the application relies on a remote server.
One of the main goals of this project is to enable data visualization to be performed collaboratively and remotely; an infrastructure to support remote rendering is therefore essential.
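The request/response flow of such a service can be sketched with a length-prefixed socket protocol. The names and message format below are illustrative only; the actual service would stream encoded video frames rather than text placeholders:

```python
import socket
import struct
import threading

def send_msg(sock, payload: bytes):
    """Length-prefixed framing: 4-byte big-endian size, then the payload."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (size,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, size)

def render_server(sock):
    """Stand-in for the GPU-equipped server: 'renders' one request."""
    request = recv_msg(sock).decode()
    frame = f"FRAME({request})".encode()  # placeholder for an encoded video frame
    send_msg(sock, frame)

# Client side: an intercepted graphics call is forwarded, the result returned.
client, server = socket.socketpair()
worker = threading.Thread(target=render_server, args=(server,))
worker.start()
send_msg(client, b"glDrawArrays(TRIANGLES, 0, 3)")  # intercepted call
result = recv_msg(client)
worker.join()
client.close()
server.close()
```

The length prefix lets the thin client reassemble each frame from an arbitrary number of network reads, which is the key requirement for streaming over TCP.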
We are also continuing the investigation, described above, into the best way to harness haptic rendering to support the modeling of craniofacial prostheses. The specific goal of this effort is to classify the actions supported by the Phantom Omni haptic device and map them to traditional sculpture operations.
Another effort consists in combining research from psychology, neurology, physiotherapy and computer science. The objective is to provide computational support to a pilot study that aims at investigating the effectiveness of a novel cognitive therapy combining working-memory training and motor imagery to foster the rehabilitation of both cognitive and motor functions in patients recovering from a severe stroke.
The goal is to create a virtual environment in which the participant is immersed through the use of a head-mounted display (HMD). Head and arm movements will be tracked and immediately mapped to equivalent movements in the virtual world. The system will support three modes of interaction: (a) by directly moving the patient's arm attached to a robotic arm (ARMEO, hocoma.com); (b) through motor imagery captured via a brain-machine interface (BMI); or (c) through a haptic device (Phantom Omni) positioned in the patient's line of sight. The latter mode will allow the participant to interact with three-dimensional (3D) objects through virtual touch. We hope that the virtual environment will create the illusion of palpable 3D virtual objects located just above the support bench, thereby encouraging a high degree of presence and engagement with the system.
4.3.3) New methodologies for haptic use in medical collaborative virtual environments
The benefits of haptic systems can be observed in situations in which the comprehension of a problem depends on, or is complemented by, the tactile sensation of objects or scenes. Touch and the manual dexterity needed to perform procedures are also factors that have promoted the development of haptic techniques and processes. The present project falls within the collaborative virtual environments and distance-learning context of the INCT-MACC.
The work proposed by LabTEVE intends to research new methodologies for incorporating haptic devices into collaborative systems for medical activities related to collaborative and distance learning. The main idea is to evaluate the benefits of haptics in touch-dependent activities, such as the exchange of medical opinions and learning.
Specific objectives are:
• Investigate the performance of networked haptic applications when a user guides remote devices;
• Investigate the performance of monitoring in haptic applications;
• Investigate the performance of haptic systems when users share the same haptic environment;
• Use CyberMed, recently developed by the LabTEVE research group, to evaluate the use of haptic systems in multi-modal collaborative applications, that is, activities that integrate visualization, interactive deformation, stereoscopy and other features present in a virtual reality system;
• Develop collaborative haptic applications for teaching tests using medical simulators.
This work falls within the following INCT-MACC contexts:
1. Modeling and simulation of surgical procedures
2. Development of collaborative virtual environments for virtual reality, augmented reality and
telemanipulation in medicine for simulation
3. Development of virtual and augmented reality collaborative environments, including
telemanipulation, for training in medicine, human resources qualification and surgical
planning
4. Distance learning and video-conference for medicine
a. Advanced computer environment with open access and systems for distance
learning
b. Multimedia communication support for medical video-conference
5. High-performance distributed cyber-environments for medical simulation.
The work is integrated into the contributions related to Collaborative Virtual Environments (CVE), Distance Learning (DL) and Medical Simulation (MS).
The first stage of the project included the development of applications to study ways of integrating haptic devices into real-time environments, particularly complex virtual reality environments that explore multiple senses. Two Master's degree candidates and one undergraduate student started their activities on this topic in the second semester of 2008, when the project was submitted to CNPq. In parallel, the LabTEVE team analyzed frameworks dedicated to the development of medical applications based on virtual reality.
Collaboration support in frameworks
The main problems reported for collaborative haptic systems concern network speed, congestion and latency. These problems are particularly noticeable when a force is applied for more than a few seconds. The high interaction sampling rate associated with haptic devices is one of the reasons that compromise network performance when such devices are used in collaborative contexts. Additionally, synchronization is necessary to provide the sense of presence in collaborative activities performed in virtual environments.
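One well-known way of taming the traffic generated by a high-rate haptic loop (commonly around 1 kHz) is deadband filtering: a sample is transmitted only when it deviates from the last transmitted one by more than a threshold. This is a generic sketch of the idea, not necessarily the technique adopted in this project:

```python
def deadband_filter(samples, deadband):
    """Transmit a haptic sample only when it deviates from the last
    transmitted value by more than the deadband threshold."""
    transmitted = []
    last_sent = None
    for s in samples:
        if last_sent is None or abs(s - last_sent) > deadband:
            transmitted.append(s)
            last_sent = s
    return transmitted

# A force signal that barely changes produces very few network packets.
sent = deadband_filter([0.00, 0.01, 0.02, 0.50, 0.51, 1.00], deadband=0.1)
```

The trade-off is between traffic reduction and the fidelity of the remotely perceived force, which is exactly the kind of performance question the objectives above set out to investigate.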
In this context, several frameworks available in the scientific literature were analyzed in order to identify their main features and their support for the development of collaborative haptic activities for medical purposes. The following frameworks were examined: ViMeT [Oliveira et al., 2006], CyberMed [Machado et al., 2009], GiPSi [Goktekin et al., 2004], Spring [Montgomery et al., 2002] and SOFA [Allard et al., 2007]. This study showed that, although some of them support haptic collaboration, only a single collaboration method was available in each case, which limits the possibilities of collaboration.
Development of a collaboration module
The design of a module for the CyberMed system was conceived to make new features available in the framework. CyberMed was chosen for this integration due to its set of features and, particularly, because it is the only framework that provides support for assessment based on statistical models of users' interactions. This framework is stable and has been developed and expanded since 2004. Nowadays, it offers: support for interaction through tracking [Carvalho Jr et al., 2009], haptic devices, mouse and keyboard; support for collision detection and interactive deformation; support for four different types of visualization, including stereoscopic viewing; loading of textures and 3D models; and support for the assessment of user interactions [Santos et al., 2010].
In order to allow different types of collaboration, the CybCollaboration architecture supports several types of collaborative activities that can be chosen by the programmer. Its development, however, demanded the modification and update of some CyberMed modules. One of them was the Interaction module, changed to allow more than one user to share the same environment, i.e., more than one interactor can be present in the same scene. Another class was modeled to listen to events from all interactors and update the scene; in particular, it is used coupled to the network module [Pereira and Machado, 2008], which is responsible for communication among the several applications. CybCollaboration was implemented to support tasks such as tutoring (1 to n) and shared manipulation (n to n). Figure 4.3.11 presents the new architecture of CyberMed after the inclusion of the collaborative module. The complete implementation is available in CyberMed version 2.0.
Figure 4.3.11. New CyberMed 2.0 Architecture.
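The interactor/listener arrangement described above can be sketched as an observer pattern. The class and method names below are hypothetical (CyberMed itself is a C++ framework); the sketch only illustrates how a single listener can gather events from several interactors sharing one scene:

```python
class Scene:
    """Shared scene state, updated from interactor events."""
    def __init__(self):
        self.state = {}

    def apply(self, interactor_id, event):
        self.state[interactor_id] = event


class CollaborationListener:
    """Listens to events from all registered interactors and updates the scene."""
    def __init__(self, scene):
        self.scene = scene

    def register(self, interactor):
        interactor.listener = self

    def on_event(self, interactor, event):
        self.scene.apply(interactor.uid, event)


class Interactor:
    """One user's input channel (haptic device, mouse, keyboard, ...)."""
    def __init__(self, uid):
        self.uid = uid
        self.listener = None

    def emit(self, event):
        if self.listener is not None:
            self.listener.on_event(self, event)


# A tutoring-style (1 to n) session: one shared scene, several interactors.
scene = Scene()
listener = CollaborationListener(scene)
tutor, student = Interactor("tutor"), Interactor("student")
listener.register(tutor)
listener.register(student)
tutor.emit("grab probe")
student.emit("move probe")
```

In the real system the listener would additionally forward each event through the network module so that every participating application converges to the same scene state.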
Ongoing work
According to the activities planned for these 18 months, the following goals were reached:
1. Research and development of uses of haptics in multi-user environments: 30% concluded;
2. Research and integration of haptics in local, remote and shared applications: 40% concluded;
3. Integration of haptics in simulators for medical training purposes: 40% concluded;
4. Training of human resources: 60% concluded.
Next steps are:
• Research on methodologies for the asynchronous distribution of digital contents using haptics (technological innovation);
• Research on methodologies for combining interactions in the synchronous distribution of digital contents using haptics;
• Training of human resources in the area of haptics and collaborative systems.
4.3.4) Framework for biopsy exam simulation
The purpose of this work is to build a framework for simulating biopsy exams. In a previous project (CNPq Process # 304590/2004-5), a prototype framework called ViMeT was proposed and implemented. It provides classes designed to create virtual environments with functionalities to simulate biopsy exams. Currently, ViMeT can generate applications with the following features: a dynamic virtual environment with two modeled objects, one representing the human organ and another the medical instrument; stereoscopy, collision detection and deformation; a database management system to store modeled-object data and generated applications; and interaction with mouse and keyboard.
The final goal is to allow applications to be generated for real medical training, thus
contributing to medical education in the country.
In the early period covered by this report, the schedule and distribution of activities were defined. In January 2010 a meeting was held with the whole team at the University of Campinas (UNICAMP), with the objective of defining the goals of each student under the responsibility of their advisors. At this meeting, the objectives of each study were discussed, some additional partnerships between team members were outlined, and it was agreed that the team would hold virtual meetings during the project to monitor progress. In late April 2010 a videoconference session was held with all members present.
The following scientific activities were defined:
• definition and implementation of an interaction module in the ViMeT framework;
• a preliminary version of the framework on the Internet (not yet available on the INCT-MACC Extranet because some improvements are still needed); current address: http://each.uspnet.usp.br/lapis/ViMeT.htm
• definition and implementation of heuristic evaluation for applications generated by the ViMeT framework;
• scene graph visualization for software testing;
• reformulation of the ViMeTWizard tool to consider complex objects;
• implementation of a virtual atlas for studying anatomy;
• implementation of the graphical interface for generating virtual atlases;
• virtual atlas evaluation with users;
• three Master's qualifying exams presented;
• bibliographic review on the themes: deformation methods to provide realism, image super-resolution, volume rendering acceleration using graphics cards, software testing for scene-graph-based applications, conventional and unconventional interaction, heuristic evaluation, and data structures for optimizing the representation of virtual objects.
Ongoing work
At this stage two versions of the framework are available at http://each.uspnet.usp.br/lapis/ViMeT.htm. They constitute a technological innovation in the health area, since applications can be generated quickly and efficiently. The second version also makes it possible to choose the interaction device among keyboard, mouse, data glove and haptics (Figures 4.3.12 and 4.3.13). Current work aims at adding greater realism to the applications through new research on interaction, visualization and case-study generation.
Figure 4.3.12. Some applications built with the current version of ViMeT.
Figure 4.3.13. Different devices used for interaction with ViMeT applications.
4.3.5) Breast 3D anatomic atlas
Modeling
To give an overall view of the breast within the female anatomy, and of how it articulates with other structures of the female body, the external structure of the body was modeled. The result is presented in Figure 4.3.14.
Figure 4.3.14. External anatomy of the breast.
Figure 4.3.15 shows the mammary glands, while Figure 4.3.16 shows the fat tissue of the breast. Finally, Figure 4.3.17 shows the mammary glands and breast fat inside the body. The breast fat surrounds the mammary glands, but the two do not interpenetrate. More fat still needs to be added to the breast model, which will be the team's next step.
Figure 4.3.15. The Mammary glands.
Figure 4.3.16. The fats of the breast.
Figure 4.3.17. Overview of the breast.
Ontological modeling
The construction of the ontological model is in its definition phase, which comprises defining the methodology for the ontology construction, studying existing ontologies, and delimiting the scope of the ontological model to be developed.
Regarding the definition of the work methodology, the team is studying the main existing methodologies for the creation/adaptation of ontologies, namely: the Cyc method, the Uschold and King method, KACTUS, SENSUS, Grüninger and Fox, Methontology, and On-To-Knowledge (GÓMEZ-PÉREZ, FERNÁNDEZ-LÓPEZ, CORCHO, 2004). These studies aim to evaluate how well such methodologies meet the following selection criteria defined by the team:
- Scope: the selected methodology must describe the complete creation cycle of an ontological model, comprising the planning, creation and evolution/maintenance of the model.
- Prototyping: the methodology must support the creation of mini-models throughout the development cycle of the ontological model, in order to guarantee its incremental development.
- Adaptability: the methodology must be easily adaptable by the team.
- Documentation: the methodology must provide enough documentation to be understood by the members of the project team.
- Reputation: the methodology must have a good reputation and recognition in the scientific community.
- Independence: the methodology must be independent of proprietary tools and standards.
In addition to studying methodologies for the creation of the ontological model, the team is analyzing existing ontologies in the domain of human anatomy description. This activity is necessary because creating an ontology from scratch requires an enormous effort and a highly specialized team with a recognized reputation in the subject being modeled. Moreover, the reuse of existing ontologies is highly recommended by the scientific community, given the natural need for evolution of the concepts already described by the experts in the area. Thus, the project team defined the following criteria for the systematic selection of the ontology to be adapted:
- Degree of detail: defines the level of description of the terms contained in the ontology.
- Degree of formalism: describes the level of formality of the language used in the ontology.
- Modeling technique: describes the technique used to model the ontology, for example first-order logic, description logic, or other languages and specific techniques.
- Extensibility: the capacity to adapt the ontology to the needs of the project through the inclusion, removal or alteration of terms and relationships of the model without loss of the original links.
- Licensing: the ontology must allow the use, extension and distribution of the new model adapted from the original.
- Type: classification of the ontology according to the criteria of Lassila and McGuinness, which group ontologies into controlled vocabulary, glossary, thesaurus, informal hierarchy, formal hierarchy, frames, value restriction and logical restriction.
It is important to point out that, beyond the criteria mentioned for the selection of an existing ontology, the team decided to verify the adherence of the chosen ontology to Gruber's principles for the definition of ontological models, namely: clarity, coherence, extensibility, consistency in the use of a common vocabulary, and independence from specific symbologies.
Finally, regarding the scope of the model to be created/evolved, the project team defined the following views, domains or dimensions of study:
- Anatomy of the female breast: ontological description of the external and internal structure of the female breast.
- Fine-needle aspiration cytology procedure applied to the breast: ontological description of this procedure.
- Medical equipment: ontological description of the equipment and materials used in the fine-needle aspiration cytology procedure applied to the breast.
Figure 4.3.18 illustrates a preliminary view of the problem domain to be treated. It is important to point out, however, that this view will be reviewed and detailed during the creation of the model, in order to define the structure of the modeled domain, its components and the relationships between them.
Figure 4.3.18. Problem domain.
Ongoing work
Simulation of surgery
The architecture proposed for this stage will be based on the model developed for the FINEP project (MELO, 2007), shown in Figure 4.3.19, which sought to integrate different technologies: a haptic processing interface (Phantom), formalization of human-anatomy knowledge based on ontologies, a Java-based graphical interface, and VR based on CHAI3D.
Figure 4.3.19. Architecture.
Since this stage consists of implementing a surgical simulator for training in the fine-needle aspiration cytology procedure applied to the breast, some new concepts need to be integrated into the VR component, among them physical modeling and object deformation. In this direction, some geometric deformation techniques have been studied and implemented (e.g., Bézier and B-spline surfaces), as shown in Figures 4.3.20.i and 4.3.20.ii. Others will be studied and implemented during the development of the architecture proposed for the MSE.
To add greater realism to the simulation, physical deformation methods are also being studied and implemented, namely Mass-Spring and Finite Elements.
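As an illustration of the Mass-Spring approach just mentioned, one explicit-Euler integration step for point masses connected by springs might look as follows. This is a 1D didactic sketch, not the simulator's implementation:

```python
def mass_spring_step(pos, vel, springs, rest, k, mass, dt, damping=0.98):
    """One explicit-Euler step for point masses on a line connected by springs."""
    force = [0.0] * len(pos)
    for (i, j), r in zip(springs, rest):
        d = pos[j] - pos[i]
        # Hooke's law along the line: positive f pulls mass i towards mass j.
        f = k * (abs(d) - r) * (1.0 if d > 0 else -1.0)
        force[i] += f
        force[j] -= f
    for i in range(len(pos)):
        vel[i] = (vel[i] + dt * force[i] / mass) * damping
        pos[i] += dt * vel[i]
    return pos, vel

# A stretched spring (length 2, rest length 1) pulls both masses inward.
pos, vel = mass_spring_step([0.0, 2.0], [0.0, 0.0], [(0, 1)], [1.0],
                            k=1.0, mass=1.0, dt=0.1)
```

In a deformable-tissue simulator the same update runs over a 2D or 3D mesh of masses at every frame; the damping factor keeps the explicit integration stable.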
Figure 4.3.20. (i) Bézier surfaces and (ii) B-spline curves.
4.3.6) Virtual and augmented reality for training and assessment of life support
procedures
Pre-surgical training system
Medical education has undergone major changes over the years. Until the mid-19th century, academic medicine was based on the observation of facts, and possible treatments were derived from this observation. Currently, the teaching of Medicine involves a continuous search for new methods to accommodate novel requirements. Parallel to this search, one of the biggest challenges for the teaching of Medicine is the evolution of technology. In surgery, for example, the advent of laparoscopic techniques has created the need for abilities quite different from those applied in conventional procedures, and the training of surgeons has relied increasingly on virtual reality (VR) tools.
Considering this context, the objectives of the present work were to develop a VR environment for the teaching of surgery at the undergraduate level, to reflect on the impact of this type of tool on the education of medical students, and to consider the feasibility of establishing a graduate program focused on the development of VR environments. For that, a multidisciplinary team was formed, and a pre-, trans- and post-surgical VR environment was developed. This environment allows students who have never been in contact with a surgical unit to complete various tasks relating to the performance of a thoracotomy and to learn the rules and routines associated with a surgical unit. A preliminary assessment with 15 medical students and 12 professionals (five physicians, five computer scientists, and two education specialists) showed that both groups considered the overall virtual experience satisfactory or very satisfactory (scores 7-9 on a 10-point scale). The VR environment developed in this study will serve as a basis for other applications, such as additional surgical modules to replace the thoracotomy module. The environment may also be adapted for training other target publics, such as nurses, nursing students, nursing assistants and others. Despite the complexities associated with the development of VR tools, the undeniable need to provide students with more opportunities for training, the inexorable ingrowth of technology into medicine, and the importance of integrating all this into an opportunity for education lead to the conclusion that this project was successful and that the research line established with the present work is extremely promising.
The pictures in Figure 4.3.21 show the virtual environment created for the project.
Figure 4.3.21. Virtual Environment for Medical Training.
Image segmentation systems
In this context we have worked on two different projects. The first one deals with a semi-automatic tool for teeth segmentation in CT images. The main goal in this case is to allow a three-dimensional reconstruction of tooth structures. Using the developed system it is possible to perform automatic and manual segmentation from DICOM images and generate a 3D file as output. This file can be loaded into CAD (Computer-Aided Design) systems that are able to create three-dimensional models from plane sections. Figure 4.3.22 shows some images from the system.
Figure 4.3.22. Tooth segmentation system.
The second project in this subject deals with lung segmentation and automatic detection of lung tumors. Lung diseases victimize a huge number of people every year. Manual diagnosis requires radiologists to analyze a vast number of tomographic images, a large-scale task that may also be affected by subjectivity due to emotional factors; the computer can aid this analysis by providing a "second opinion" to the radiologist, notifying him when something suspicious appears. The present work proposes a computational system that uses co-occurrence matrix descriptors for the aided detection of lung diseases. The system is trained with the value intervals of these descriptors for each pattern category provided by the doctor. These values constitute the basic rule for deciding whether a lung region is highlighted or not in a thorax image. In the final tests, the capacity of the system as a proposal for the aided detection of lung diseases was evaluated, and the results were satisfactory.
Figure 4.3.23 shows some images from the system.
Figure 4.3.23. Lung segmentation system.
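The co-occurrence descriptors mentioned above can be sketched as follows: build a gray-level co-occurrence matrix (GLCM) for a given pixel displacement and derive a texture descriptor, such as contrast, from it. This is a didactic pure-Python sketch; the actual system and its full descriptor set are not reproduced here:

```python
def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for one pixel displacement (dx, dy)."""
    h, w = len(img), len(img[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
    return m

def contrast(m):
    """Contrast descriptor: expected squared gray-level difference of pixel pairs."""
    total = sum(sum(row) for row in m) or 1
    n = len(m)
    return sum(m[i][j] * (i - j) ** 2 for i in range(n) for j in range(n)) / total

# Horizontal pairs of this 2x2 image are uniform; vertical pairs differ.
img = [[0, 0], [1, 1]]
horizontal_contrast = contrast(glcm(img, dx=1, dy=0, levels=2))
vertical_contrast = contrast(glcm(img, dx=0, dy=1, levels=2))
```

Descriptor values computed this way over lung regions are what the system compares against the per-category intervals provided by the doctor.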
Medical emergency training
The area of medical qualification in Life Support (LS) training is constantly being improved. However, many problems still have to be faced in the training sessions. During these sessions, students or physicians can repeatedly practice patient-care procedures in simulated scenarios using anatomical manikins especially designed for this type of training. Current manikins incorporate several resources to allow and facilitate qualified training, such as pulse, arrhythmia and auscultation simulators. However, some deficiencies have been detected in the existing LS training structure, for example: automatic feedback to the students in response to their actions on the manikin, images such as facial expressions and body injuries, and their combination with sounds that represent the clinical state of the patient. The main goal of the ARLIST project is to improve the traditional environment currently used for LS training by introducing image and sound resources into the training manikins. Through these features we can simulate aspects such as facial expressions, skin color changes, scratches and skin injuries via image projection over the manikin body, and also play sounds like cries of pain or groans of an injured person. Figure 4.3.24 shows some images of the ARLIST manikin.
Multi-touch immersive volume visualization device
This effort consists of a study of the interaction tasks contained in medical imaging visualization systems. Based on this study, a device was developed for interaction in virtual environments for medical imaging visualization. This device combines the advantages of directly mapping the user's actions into the virtual environment with a method for entering values through a multi-touch sensor. As a result, although users spend more time completing the tasks in the virtual environment, in terms of accuracy the device was as effective as the desktop interface.
Figure 4.3.24. ARLIST manikin.
Figure 4.3.25 shows the device on the left and the cutting task on the right.
Figure 4.3.25. Multi-touch device and its graphical representation inside the immersive
environment.
Virtual environments for post-traumatic stress disorder treatment
This project is developing a study on how virtual environments can be used to treat patients who have developed post-traumatic stress disorder after a bank robbery. Figure 4.3.26 shows an image of the virtual environment used for the treatment.
Figure 4.3.26. Virtual Bank Scenario.
Evaluating dental drilling procedures
In order to help evaluate the learning process of dental drilling procedures, we have developed, in cooperation with the School of Dental Medicine of PUCRS, an image processing tool (hardware and software) to help students visualize their class work.
The system captures teeth images from many angles and presents them to the student, allowing him to see in detail how the drilling was performed. Besides that, the system can measure the drilled teeth to provide more accurate feedback.
Figure 4.3.27 shows the evaluation software on the left and, on the right, the hardware used to capture the teeth images.
Figure 4.3.27. Dental evaluation tool.
Gait analysis
This project aims at evaluating the gait of a person running on a treadmill using low-cost hardware. To this end, it is developing image processing software to capture, process and evaluate the positions of the leg and arm joints.
In order to facilitate the image analysis, we attach small, highly reflective infrared fabric markers to the body joints and illuminate the environment with infrared lights.
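A first step of such marker-based tracking, finding the bright marker blobs in an infrared frame, can be sketched as thresholding followed by connected-component centroids. This is an illustrative sketch, not the project's software:

```python
from collections import deque

def marker_centroids(img, thr):
    """Threshold the frame, then return the centroid (row, col) of each
    4-connected bright blob (one blob per reflective marker)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thr and not seen[y][x]:
                queue = deque([(y, x)])
                seen[y][x] = True
                pts = []
                while queue:
                    cy, cx = queue.popleft()
                    pts.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] >= thr and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                centroids.append((sum(p[0] for p in pts) / len(pts),
                                  sum(p[1] for p in pts) / len(pts)))
    return centroids

# Two markers: a single bright pixel and a 2-pixel vertical blob.
frame = [[0, 0, 0, 0, 0],
         [0, 255, 0, 0, 0],
         [0, 0, 0, 255, 0],
         [0, 0, 0, 255, 0]]
markers = marker_centroids(frame, thr=200)
```

Tracking the centroids from frame to frame then yields the joint trajectories needed for the gait evaluation.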
Figure 4.3.28 shows, on the left, an image captured using the described setup and, on the right, a screenshot from the developed software.
Figure 4.3.28. Gait analysis software.
Supporting tool for blind people
Based on augmented reality techniques, we have developed software to help blind people find places and objects using a smartphone.
The software captures an image with the smartphone camera, recognizes special tags placed in the environment and produces audible messages, helping the user find the objects or places he/she wants.
4.4) Area 4: Information systems in health
4.4.1) Acute myocardial teleconsultation and monitoring system (AToMS)
Acute Myocardial Infarction (AMI) is among the leading causes of death and physical
incapacity worldwide. Ischemia—a sudden reduction or interruption of the blood flow to a
tissue because of an arterial constriction or obstruction—is one of the most common causes
of AMI. Therefore, ischemia is a preferential target for the development of therapeutical
procedures.
Currently, thrombolytic therapy is considered to be one of the most efficient therapeutical
procedures to resume the blood flow in a previously obstructed artery. This therapy consists
of administering thrombolytic drugs that dissolve the arterial obstruction so that the blood flow
to the patient’s heart muscle can be restored, thus preventing further health damage. In the
medical literature, the thrombolytic therapy is often compared with angioplasty— a highly
adopted procedure in ischemic AMI cases—that involves surgical intervention to unblock the
obstructed artery. Crucially, the treatment with thrombolytics is far less costly than
angioplasty; for instance, in Brazil less than 3% of AMI patients can afford an angioplasty
intervention. Moreover, there is unequivocal benefit in terms of morbidity and mortality for
prompt AMI treatment with thrombolytics in comparison with angioplasty. Nevertheless, to
avoid some possible hazards of thrombolysis, caution should be taken to evaluate if an AMI
patient is eligible for this treatment. Typically, a cardiologist may diagnose an AMI patient to
be thrombolytic-eligible based on an electrocardiogram (ECG) and some information on the
recent medical history of the patient.
The sooner an AMI patient's blood flow is restored after the onset of infarction symptoms,
the better the chances of avoiding severe damage to, or death of, the heart muscle.
Recommended target delays to initiate thrombolytic therapy range from 30 to 90 minutes
after the patient calls for medical treatment. This scenario demands an efficient communication
system that enables highly-coordinated actions among its participants.
Recent advances in wireless communication technology enable envisaging novel ubiquitous
healthcare systems that simplify the monitoring and treatment of patients, although much of
the research effort in the area regards the use of such technology in the context of personal-
and local-area networks only. Of special interest to the INCT-MACC is the adoption of
wireless technology in the field of healthcare systems.
Based on the aforementioned considerations, we have developed a telemedicine system
called AToMS (AMI Teleconsultation & Monitoring System). AToMS is intended to provide
decision support and auditing when paramedics deliver pre-hospital emergency services to
AMI patients through the administration of thrombolytic therapy.
The design of AToMS adopts appliances for ECG monitoring and PDAs/netbooks with
integrated communication capabilities (e.g. supporting GPRS, Bluetooth, WiFi, and WiMax). A
paramedic may use AToMS to communicate with a cardiologist in order to decide if an AMI
patient is thrombolytic-eligible or not. For instance, this communication may include the
exchange of electronic health record (EHR) information and digitalized ECG results through a
fully auditable system.
Note that most healthcare systems are limited to teleconsultation or to the application of
wireless healthcare appliances within hospitals to ease patient management. In contrast,
AToMS intends to bring an efficient treatment to AMI patients in a fully controlled and
professionally-assisted way to the location where the first emergency assistance is delivered,
thus saving precious time in providing appropriate treatment.
Note also that, being location-independent as long as some wireless technology is
available for communication, AToMS may be used by paramedics in ambulance services or by
non-specialized physicians in remote regions, for instance. Furthermore, the communication
system may be used for continuously monitoring the AMI patient while he or she is transferred
to a coronary (cardiac) care unit (CCU), enabling a specialized team to be appropriately
prepared to promptly take care of the patient upon arrival.
Human intervention in the AToMS system is usually restricted to requests for consultation from
paramedics and replies to such a consultation from cardiologists. A request for consultation
comprises data such as digitalized ECG and the patient’s recent medical history. Such a
history can be either retrieved from a coordination server (see below), or promptly filled in by
the paramedic, or both. The system conveys the patient's data to a cardiologist, who can
then decide whether the patient is thrombolytic-eligible or not.
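As an illustration of what such a request might carry, the sketch below models a consultation request as a serializable record. The field names are hypothetical and do not reflect the actual AToMS data model:

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class ConsultationRequest:
    """Hypothetical payload a paramedic's device might send (illustrative only)."""
    patient_id: str
    ecg_samples_mv: List[float]   # digitalized ECG trace, in millivolts
    recent_history: List[str]     # e.g. ["hypertension", "smoker"]
    onset_minutes_ago: int        # minutes since symptom onset

    def to_json(self) -> str:
        """Serialize the record for transmission over the WWAN link."""
        return json.dumps(asdict(self))

req = ConsultationRequest("p-001", [0.1, 0.4, -0.2], ["hypertension"], 25)
payload = req.to_json()
restored = ConsultationRequest(**json.loads(payload))
```

A round trip through `to_json` and back yields an equal record, which is the property an auditable store of consultation requests relies on.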
Figure 4.4.1 presents the AToMS’s main actors and components, which are explained below.
Figure 4.4.1. AToMS overall architecture
In the typical scenario illustrated in this figure, paramedics handle mobile devices (e.g. PDAs,
mobile phones, or laptops) that communicate with ECGs. Data gathered from an ECG is
collected by the paramedic’s mobile device to be sent as part of the patient data to an
available cardiologist through a wireless wide-area network (WWAN), e.g. GPRS (step 1 in
Figure 4.4.1). Figure 4.4.2 illustrates some screenshots from the AToMS module that runs in
the mobile devices at the moment a paramedic is gathering patient data.
Figure 4.4.2. Paramedic gathering patient data in the AToMS system.
The AToMS system directs the collected data from the AMI patient to a virtual teleconsultation server (step 2 in Figure 4.4.1), which is responsible for putting the paramedics
promptly in contact with an available cardiologist in case of an AMI incident. Such a virtual
server thus allows the system to have its cardiologists scattered around (e.g. in their own
offices or in different hospitals) instead of physically available in a typical call center. Figure
4.4.3 illustrates a screenshot from the web-based AToMS module to which a cardiologist has
access during a teleconsultation.
Figure 4.4.3. Web-based cardiologist view of a teleconsultation in the AToMS system.
Our proposed system provides a decision support tool that allows a cardiologist to decide on
the applicability of thrombolytic therapy in an AMI patient attended by a paramedic at the
location where the emergency team reached this patient (step 3 in Figure 4.4.1). This may significantly
reduce the delay between the onset of symptoms and the effective application of the
thrombolytics. It should be remembered that there is usually a reasonable delay between the
onset of symptoms and the request for professional aid. Hence, mitigating the delay between
the arrival of the emergency team at the location of the AMI patient and the thrombolysis
administration may be decisive for a successful intervention. Similarly, if we consider remote
regions without easy and fast access to a CCU, the remote support by a cardiologist on the
applicability of thrombolytic therapy for a given AMI patient may be crucial to keep the
eventual application of thrombolytics within the recommended target delays between the call
for aid and the effective intervention. Further, decisions taken by a cardiologist on the thrombolytic
eligibility can trigger a request for transferring the AMI patient to a CCU (step 4 in Figure
4.4.1).
The coordination server then indicates the nearest CCU that is able to properly proceed with
the treatment and also generates a transfer notification to this CCU. The emergency team
then receives orientation on the thrombolytic eligibility of the AMI patient and on the nearest
CCU able to receive this patient (step 6). Meanwhile, the nearest CCU receives the transfer
notification about the imminent arrival of an AMI patient (step 6’). Furthermore, while the AMI
patient is being displaced to the nearest available CCU, the patient can be continuously
monitored. Based on the monitoring information and the patient’s EHR, a cardiologist at the
CCU keeping track of the AMI patient can be promptly notified about any alteration on the
patient’s state and prepare in advance the CCU team for the imminent arrival of this particular
patient.
Note that the AToMS system is fully auditable. The coordination server records all data flows
in the system (EHRs, replies to consultations, and so on), building up a database that can be
later used for statistical data analysis and independent supervision of decisions taken by both
paramedics and cardiologists, for instance.
The AToMS system is currently being commissioned in the University Hospital (HUCFF) of
the Federal University of Rio de Janeiro (UFRJ). Figure 4.4.4 shows some of the on-site tests
currently being conducted in this hospital.
Figure 4.4.4. (a) AToMS prototype being commissioned in the HUCFF/UFRJ.
Figure 4.4.4. (b) AToMS prototype being commissioned in the HUCFF/UFRJ (continued).
Ongoing work
The future tasks to be performed in this area involve the use of innovative software
development techniques to be employed for mobile healthcare systems in different medical
specialties, with emphasis on emergency healthcare and syndromic surveillance. In particular,
the following topics are currently under investigation:
1. Aspect-oriented programming (AOP) techniques, with a focus on the AspectJ
language, which allows incorporating new facilities into Java applications in a highly
modular way. The AspectJ language has already been employed, as a proof-of-concept,
to incorporate cryptographic facilities into the AToMS system.
2. Software product line (SPL) engineering, with the aim of identifying common features
among different emergency healthcare and syndromic surveillance systems.
3. Model-driven software development (MDSD) approaches, to allow the creation of
domain-specific languages (DSLs) for emergency healthcare and syndromic
surveillance systems, as well as (semi-)automatic code generation based on such
languages.
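AspectJ itself targets Java; to illustrate the cross-cutting idea behind item 1 in a compact form, the Python sketch below weaves a toy encryption concern around a transmission function without modifying the function's own code. All names (and the XOR "cipher") are hypothetical and purely illustrative:

```python
import functools

def encrypting_aspect(key: int):
    """Toy 'around advice': XOR-encrypt the payload before the wrapped
    function runs, leaving the core transmission code untouched."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(payload: bytes):
            ciphered = bytes(b ^ key for b in payload)
            return func(ciphered)
        return wrapper
    return decorator

sent = []  # stands in for the network channel

@encrypting_aspect(key=0x5A)
def send_record(payload: bytes):
    """Core transmission logic, unaware of the cryptographic concern."""
    sent.append(payload)
    return len(payload)

send_record(b"ECG")
```

The decorator plays the role of an aspect: the cryptographic concern is declared once and applied declaratively, which is the modularity argument made for AspectJ above.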
4.4.2) Syndromic surveillance decision support system for epidemic diseases
The efficiency of epidemiological surveillance actions has the potential to reduce the
incidence of several diseases that are considered global health priorities. These diseases can
only be controlled if the health professionals at the point-of-care can accurately identify
existing cases, thus enabling them to implement the appropriate prevention and control
measures. Typically, the earlier the recognition of the cases, the more effective the prevention
is.
In order to ensure the effectiveness of disease control and prevention, the time interval
between patient admission at the point-of-care and diagnostic suspicion must be reduced. In
addition, it is necessary to improve the diagnosis of severe and atypical cases of epidemic
and endemic diseases.
The traditional approach to epidemiological surveillance adopts retrospective case definitions,
which are not adequate for the current scenarios of emerging and re-emerging pandemics. The most
appropriate approach, called syndromic surveillance, identifies the early suspicion of epidemic
or endemic cases based on a small set of signs and symptoms and a simple diagnostic
algorithm that can be implemented in small healthcare applications.
Healthcare applications based on traditional data models are not interoperable and have high
maintenance costs. These issues have a significant negative impact on the applicability of
these applications for the dynamic and emergent situations found in epidemiological
surveillance. In fact, the development of healthcare applications is a complex challenge,
especially due to the high number of concepts in constant evolution, which makes reaching
consensus rather difficult.
Some solutions to those problems have been proposed over the last two decades, with the
main solution involving the separation between the domain model and the data persistence.
In this context, we employ a multilevel modeling approach proposed by the openEHR
Foundation (http://www.openehr.org). The original openEHR specifications define two model
levels: the Reference Model, defining the generic data types and data structures, and a
Domain Model, defined by constraints (archetypes) on the Reference Model. Figure 4.4.5
depicts the main concepts that comprise the openEHR specifications.
Figure 4.4.5. The openEHR health computing platform.
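The two-level idea can be shown compactly: generic data structures on one level, archetype constraints applied to them on another. The sketch below is a drastic simplification of the openEHR specifications, with hypothetical names and bounds:

```python
# Level 1: a generic Reference Model structure (greatly simplified).
class Element:
    """Generic named value, independent of any clinical domain concept."""
    def __init__(self, name, value):
        self.name, self.value = name, value

# Level 2: an "archetype" expressed as constraints on the generic structure.
body_temperature_archetype = {
    "name": "body_temperature",
    "value_range": (30.0, 45.0),   # degrees Celsius; illustrative bounds
}

def conforms(element, archetype):
    """Check a generic Element against the archetype's constraints."""
    lo, hi = archetype["value_range"]
    return element.name == archetype["name"] and lo <= element.value <= hi

ok = conforms(Element("body_temperature", 38.5), body_temperature_archetype)
bad = conforms(Element("body_temperature", 80.0), body_temperature_archetype)
```

Because the Reference Model never changes when clinical concepts evolve, only the archetypes do, which is the maintenance and interoperability advantage argued for in the text.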
Healthcare applications based on multilevel modeling approaches such as openEHR are
more easily interoperable and can be deployed on any hardware, including mobile devices.
The adoption of a common reference and domain model for different applications enables a
transparent and shareable interface to geographic information systems and statistical
time-series analysis tools, which can analyze information collected among several remote
systems.
The development of decision support algorithms based on a common domain model enables
the reuse of decision rules across different implementations. Thus, at the point-of-care, the
control measures can be implemented immediately, blocking the disease transmission chain,
while at the governance level, wider areas can be monitored and priority regions can be
identified for disease control.
To date, we have implemented the skeleton of the Reference and Domain Models of the
openEHR specifications in Python as object models, using the Zope application server and
the Grok framework. We have also developed the 28 demographic archetypes (for the
identification of persons and organizations) that are available in the repository of the
openEHR Foundation (http://www.openehr.org/knowledge). A prototype decision support
system has also been implemented using archetypes through PyCLIPS, an extension module
for Python that embeds all the functionality of CLIPS (C Language Integrated Production
System), a rule engine for decision support systems based on the openEHR specifications.
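As a simplified illustration of the kind of diagnostic rule such a system encodes (the rule and thresholds below are hypothetical and are not clinical guidance), a syndromic suspicion can be expressed as a predicate over reported signs and symptoms:

```python
def dengue_suspicion(findings: set) -> bool:
    """Hypothetical syndromic rule: fever plus at least two
    supporting signs raises an early suspicion flag."""
    supporting = {"headache", "rash", "myalgia", "retro-orbital pain"}
    return "fever" in findings and len(findings & supporting) >= 2

case_a = dengue_suspicion({"fever", "headache", "myalgia"})  # flagged
case_b = dengue_suspicion({"fever", "cough"})                # not flagged
```

In the actual prototype, such predicates live as CLIPS production rules driven by archetype-conformant data, so the same rule base can be reused by any application sharing the domain model.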
Ongoing work
The developments of the INCT for the next year in this topic include:
1. Development of clinical archetypes (about 20) for the decision support system.
2. Implementation of a Template Model, defined as constraints over the Domain Model of
the openEHR specifications, to allow the automatic generation of graphical user
interfaces (GUIs) for the decision support system. The specification of the Template
Model is not yet complete at the openEHR Foundation, so we may have to create our
own template model specification.
3. Migration of the AToMS system to the multilevel approach. Work has already been
initiated to check whether the archetypes available in the repository of the openEHR
Foundation can accommodate the AToMS data model and which changes should be
made to some archetypes in order to make them compatible with AToMS.
4.4.3) QoS Support for Health Information Systems over Wireless Mesh Networks
Wireless devices are commonly used by clinical specialists. Wireless networks can be used,
for example, to access remote health information systems, to exchange patient data with
another clinical specialist to get a second opinion about a patient case or even to obtain a
remote diagnosis made by a specialist, such as the AToMS system proposal.
In intra-hospital environments, infrastructure WiFi networks can be used to provide access to
those devices. However, outside the hospital environment, different technologies, such as
GPRS, WiMax and wireless mesh networks, may be used. Our goal is to investigate the use of
wireless mesh networks (WMNs) as a communication infrastructure for health information
systems. Wireless mesh networks are built of wireless routers communicating over IEEE
802.11 links in ad-hoc mode. WMNs use multihop forwarding and dynamic routing
protocols in order to forward packets to their destination. Figure 4.4.6 illustrates an example of
a WMN. Client devices can connect to mesh routers using standard WiFi interfaces. WMNs are a
promising technology to provide broadband access because they are easy to deploy, low-cost
and fault-tolerant.
Figure 4.4.6. Wireless mesh network.
The MídiaCom Lab has extensive experience in the development of wireless mesh networks.
We have developed a WMN solution based on OpenWRT that can be installed on low-cost
routers, such as the Linksys WRT54G or the Ubiquiti Bullet. WMNs built with our solution were
deployed in the cities of Niterói/RJ, Belém/PA, Brasília/DF and Curitiba/PR. We maintain a
WMN testbed at the Fluminense Federal University (UFF) to test new solutions and protocols
that enhance network performance. Figure 4.4.7 illustrates a 10-node network at UFF.
Our mesh solution is based on an extension of the OLSR routing protocol, called OLSR-ML
(minimum loss), which computes best routes based on minimum loss paths. In the figure,
colors indicate the wireless link quality, where blue links are the best ones.
Figure 4.4.7. UFF WMN testbed.
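The intuition behind a minimum-loss metric can be sketched as a shortest-path computation in which each link is weighted by the negative log of its delivery probability, so that the cheapest path maximizes end-to-end delivery. This illustrates the general idea, not the OLSR-ML implementation itself; the topology below is invented:

```python
import heapq
import math

def min_loss_path(links, src, dst):
    """Dijkstra over -log(delivery probability) link weights.

    `links` maps (u, v) -> delivery probability in (0, 1]; minimizing
    the sum of -log(p) maximizes the product of probabilities.
    """
    graph = {}
    for (u, v), p in links.items():
        w = -math.log(p)
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))  # links are bidirectional
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [dst], dst
    while node != src:           # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

# A 2-hop path over good links beats a lossy direct link:
links = {("A", "B"): 0.5, ("A", "C"): 0.95, ("C", "B"): 0.95}
best = min_loss_path(links, "A", "B")
```

Here the direct link delivers 50% of packets while the two-hop route delivers about 90%, so the minimum-loss route goes through C.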
With the increasing use of multimedia applications, and also of health information
systems over WMNs, the need for quality of service provided by the network becomes crucial. There
are several proposals to provide QoS support in WMNs in the literature, considering different
network layers, such as the network layer, the MAC layer and cross-layer proposals. We are
investigating different possible solutions in order to guarantee a better service for critical
applications such as e-health and multimedia systems.
Ongoing work
The current developments of the INCT in this topic include:
1. Development of network layer solutions based on tc (Linux traffic control) for providing
traffic differentiation over WMNs.
2. Development of MAC layer solutions based on IEEE 802.11e for providing QoS
support over WMNs.
3. Performance tests of the AToMS system over WMNs.
4. Development of integrated network management and monitoring tools for easy
deployment, maintenance and operation of WMNs.
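To make item 1 concrete, traffic differentiation with tc typically builds an HTB class tree and steers marked traffic into a privileged class. The sketch below only assembles illustrative command lines (the interface name, rates and the DSCP EF match are our assumptions, not the project's actual configuration) rather than executing them:

```python
def tc_commands(dev="wlan0", ehealth_rate="2mbit", default_rate="512kbit"):
    """Generate illustrative tc commands: an HTB root qdisc, a
    privileged class for e-health traffic, a default class, and a u32
    filter matching the DSCP EF codepoint (0xb8 in the ToS byte)."""
    return [
        f"tc qdisc add dev {dev} root handle 1: htb default 30",
        f"tc class add dev {dev} parent 1: classid 1:10 htb rate {ehealth_rate}",
        f"tc class add dev {dev} parent 1: classid 1:30 htb rate {default_rate}",
        f"tc filter add dev {dev} protocol ip parent 1: prio 1 "
        f"u32 match ip tos 0xb8 0xfc flowid 1:10",
    ]

cmds = tc_commands()
```

Unmatched traffic falls into class 1:30 (the `default 30` of the root qdisc), so e-health packets marked EF get the reserved rate while everything else shares the default class.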
4.5) Area 5: Distributed computing cyberenvironments
4.5.1) Hemolab and Imagelab on a private cloud
The concept of a private cloud refers to the use of cloud middleware on private
computational resources in order to execute sensitive software in a private environment,
displaying the application interface remotely on a desktop or mobile device and, to avoid
access restrictions, also allowing a web interface.
Ongoing work
The future tasks to be performed in this area involve the use of innovative software
development techniques to be employed for systems in different medical specialties, with
emphasis on diagnosis and pre-surgery evaluation. In particular, the following topics are
currently under investigation:
1. Use of virtualization, considering different virtual machine implementations and the
corresponding performance under different scheduling strategies and processor
architectures.
2. Evaluation of candidate middleware, such as Nimbus/Globus, Eucalyptus,
OpenNebula and Aneka, among others.
Figure 4.5.1. Open-source solutions for cloud computing.
3. Evaluation of web interfaces as an alternative with minimum access restrictions.
4. Networking infrastructure necessary to support the remote-display demands of the
medical applications, possibly structuring the access points to provide the backbone
speed required to grant users access with the needed performance.
Figure 4.5.2. OpenNebula structure.
4.5.2) Performance and deployment evaluation of applications
We need to have a set of running applications available and to explore and improve
scheduling strategies, including the use of scientific workflows as an additional scheduling
strategy.
Ongoing work
The developments of the INCT for the next year in this topic include:
1. Performance and Deployment Evaluation of Parallel Applications in a Cloud
Figure 4.5.3. Performance and deployment evaluation of parallel applications in a cloud.
2. Running hemodynamics simulations
Figure 4.5.4. Modeling behavior of the cardiovascular system on EELA Grid.
3. EasyGrid Enabling of Malleable Iterative Parallel MPI Simulations
Simulation is a tool of utmost importance in diverse scientific fields as we seek to gain a better
understanding of the behavior that governs the physical world around us. The solution,
typically computed as displacements, pressures, temperatures, or other physical quantities
associated with grid points, mesh nodes, or particles, represents the state of the system being
simulated at a given time and often depends on the values of these states at earlier points in
time. In most cases, the high computational requirements of these simulations can only be met
by large-scale parallel computing systems. The tightly coupled nature means that iterative
simulations have the characteristic of running at the speed of the slowest process and thus
are most suited to running on systems of identical nodes. Although Computational Grids and
Clouds offer unprecedented amounts of computational power at low cost, and are becoming
increasingly widespread and accessible, their architectural characteristics have hindered their
adoption especially for iterative simulations. Not only are these environments composed of
heterogeneous resources, but the power available from any given resource may vary and is
not generally guaranteed. This work discusses the challenge of how to permit the same tightly
coupled parallel simulations, optimized for uniform, stable, static computing environments, to
execute equally efficiently in environments that exhibit the complete opposite
characteristics. Using the N-body problem as a case study, both a traditional and a
grid-enabled malleable MPI implementation of the popular ring algorithm are analyzed and
compared. Results with respect to performance show the latter approach to be competitive on
a homogeneous cluster but significantly more capable of harnessing the power available in
heterogeneous and dynamic environments.
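The ring algorithm mentioned above can be sketched without MPI: each of P processes holds a resident block of particles and, in P-1 steps, passes a traveling block around the ring, interacting its resident block with each block that arrives. The sequential simulation below illustrates the communication pattern (it is not the EasyGrid code):

```python
def ring_interactions(blocks):
    """Simulate the MPI ring algorithm sequentially: after P-1 shifts,
    every process has 'met' every other block exactly once.
    Returns, per process, how many remote particles it interacted with."""
    P = len(blocks)
    traveling = list(blocks)   # each process starts with its own block
    met = [0] * P
    for _ in range(P - 1):
        # every process forwards its traveling block to its ring successor
        traveling = [traveling[(i - 1) % P] for i in range(P)]
        for i in range(P):
            met[i] += len(traveling[i])   # resident vs. traveling interaction
    return met

# 4 processes with blocks of unequal size (heterogeneous decomposition)
blocks = [["a1", "a2"], ["b1"], ["c1", "c2", "c3"], ["d1"]]
met = ring_interactions(blocks)
```

Since every shift is globally synchronous, each step runs at the pace of the slowest process, which is exactly why heterogeneous or fluctuating resources hurt this class of algorithm and why a malleable decomposition helps.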
4. Workflows for Design and Execution of Health Applications
Health Applications in e-science are frequently modeled as a set of actions performed, in a
specific order, upon a data set, in order to achieve the expected result. The workflow model
fits well into this operation schema and is able to model the complexities of such applications,
offering interfaces to the design and visualization of data and actions, as well as means to
control the actions' execution in an efficient and distributed way. Our goal is to design and
implement a free software platform for creating and executing health-care applications based
on the workflow paradigm. We propose an architecture with a clear definition of its
components and layers and a Workflow Management System based on cloud computing –
CloudMedWF. The platform should provide a graphical user interface where the user can
build his application using the provided abstract processing services, seen as building blocks.
Each block has a well-defined data interface, and can be implemented considering the
various technologies usually available in a heterogeneous parallel computing environment,
such as multi-core general-purpose processors (CPUs), multi-core Graphic Processing Units
(GPUs) and special devices like Field-Programmable Gate Arrays (FPGAs). Once the user
has built an application, he or she can use the platform to execute it on private or commercial
clouds, as well as on shared grid infrastructures. The instantiation of each abstract processing
service (block) used by the application is controlled by a scheduling algorithm that considers
factors such as the available hardware, load balance and synergism in the data flow, among
others, to decide where and how to run. At present, we have developed a diskless
Linux-based system for heterogeneous processing servers, simplifying the management of the
computing infrastructure, and implemented some image processing filters, notably the Canny
edge detection filter for the ITK toolkit, using multicore CPUs and NVidia GPUs, to evaluate
the efficiency and limitations of each hardware type, so as to feed the scheduling algorithm.
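The scheduling of abstract blocks onto heterogeneous devices can be sketched as a topological traversal that assigns each ready block to its preferred available device type. A toy version of the idea (block names, device types and preferences are hypothetical, not the CloudMedWF scheduler):

```python
from collections import deque

def schedule(blocks, deps, preferred, devices):
    """Order blocks respecting dependencies and pick a device per block.

    `deps` maps block -> set of prerequisite blocks; `preferred` maps
    block -> ordered device-type wish list; `devices` is the set of
    device types actually available in the environment.
    """
    indeg = {b: len(deps.get(b, set())) for b in blocks}
    ready = deque(b for b in blocks if indeg[b] == 0)
    plan = []
    while ready:
        b = ready.popleft()
        # first preference present in the environment, else CPU fallback
        dev = next((d for d in preferred.get(b, []) if d in devices), "cpu")
        plan.append((b, dev))
        for other in blocks:               # release blocks waiting on b
            if b in deps.get(other, set()):
                indeg[other] -= 1
                if indeg[other] == 0:
                    ready.append(other)
    return plan

blocks = ["load", "canny", "mesh"]
deps = {"canny": {"load"}, "mesh": {"canny"}}
preferred = {"canny": ["gpu", "cpu"], "mesh": ["fpga", "cpu"]}
plan = schedule(blocks, deps, preferred, {"cpu", "gpu"})
```

With no FPGA available, the mesh block falls back to the CPU while the Canny filter lands on the GPU, mirroring the fallback behavior a real scheduler would need when instantiating abstract blocks on whatever hardware the cloud or grid offers.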
- Service discovery
- High performance in CUDA
- High performance with OpenMP and SSE
- Volumetric mesh generator
Figure 4.5.5. Overview of the proposed system.
5. Scientific Model Management System
- Scientific hypotheses expression
- Scientific model specification
- Computational model definition and evaluation
- Declarative workflow specification and automatic workflow instantiation and evaluation
- Simulation results management and hypotheses validation
- Uses QEF (Query Engine Framework) as the workflow evaluation platform
[Figure: QEF components — Query Manager, DataSource Manager, Query Engine, Transaction Monitor, Plan Manager, Distributed Manager, Catalog Manager, Operator Factory, Web Service Factory, Cache Manager, Query Optimizer and G2N — organized into data sources management, execution & workflow management, and distribution & parallelization management layers.]
Figure 4.5.6. QEF Architecture.
5) Science and technology results
The highlights referred to in the previous section have materialized in the following scientific
publications and technological innovations by associated and collaborator laboratories within
the INCT-MACC.
5.1) Publications in journals
P1) Blanco, PJ, Urquiza, SA, Feijóo, RA. On the potentialities of 3D-1D coupled models in
hemodynamics simulations. Journal of Biomechanics, 42, 919-930, 2009
P2) Blanco, PJ, Feijóo, RA. Sensitivity analysis in kinematically incompatible models.
Computer Methods in Applied Mechanics and Engineering, 198, 3287-3298, 2009.
P3) Leiva, JS, Blanco, PJ, Buscaglia, GC. Iterative strong coupling of dimensionally
heterogeneous models. International Journal for Numerical Methods in Engineering, 81,
1558-1580, 2010.
P4) Blanco, PJ, Urquiza, SA, Feijóo, RA. Assessing the influence of heart rate in local
hemodynamics through coupled 3D-1D-0D. Communications in Numerical Methods in
Engineering, Accepted for publication, 2010.
P5) Blanco, PJ, Pivello, MR, Urquiza, SA, Souza e Silva, NA, Feijóo, RA. Coupled models
technology in multi-scale computational hemodynamics. International Journal of Biomedical
Engineering and Technology, Accepted for publication, 2009.
P6) Ausas, R, Dari, E, Buscaglia, GC. A geometric mass–preserving redistancing scheme for
the level set function. International Journal for Numerical Methods in Fluids. Accepted for
publication, 2010.
P7) Ausas, R, Jai, M, Buscaglia, GC. A mass-conserving algorithm for dynamical lubrication
problems with cavitation. Journal of Tribology, v. 131, p. 031702-1, 2009.
P8) Ausas, RF, Sousa, FS, Buscaglia, GC. An improved finite element space for
discontinuous pressures. Computer Methods in Applied Mechanics and Engineering, v. 199,
p. 1019-1031, 2010.
P9) Berger, M, Nonato, LG, Pascucci, V, Silva, CT. Fiedler trees for multiscale surface
analysis. Computers & Graphics, v. 34, p. 272-281, 2010.
P10) Ausas, RF, Sousa, FS, Buscaglia, GC. An improved finite element space for
discontinuous pressures. Computer Methods in Applied Mechanics and Engineering, v. 199,
p. 1019-1031, 2010.
P11) Bombardelli, F, Cantero, M, Garcia, M, Buscaglia, GC. Numerical aspects of the
simulation of discontinuous saline underflows: The lock-exchange problem. Journal of
Hydraulic Research, v. 47, p. 000, 2009.
P12) Cruz, PA, Tomé, MF, Stewart, I, McKee, S. A numerical method for solving the dynamic
three-dimensional Ericksen-Leslie equations for nematic liquid crystals subject to a strong
magnetic field, Journal of Non-Newtonian Fluid Mechanics, vol. 165, pp. 143-157, 2010.
P13) Cuadros-Vargas, AJ, Lizier, MAS, Minghim, R, Nonato, LG. Generating Segmented
Quality Meshes from Images, Journal of Mathematical Imaging and Vision, 2009, 33:11–23.
P14) Cuminato, JA, Fitt, AD, Mphaka, MJS, Nagamine, A. A singular integro-differential
equation model for dryout in LMFBR boiler tubes, IMA Journal of Applied Mathematics
Advances, p. 1-22, 2009.
P15) Cuminato, JA, MacKee, S. A note on the eigenvalues of a special class of matrices,
Journal of Computational and Applied Mathematics, Accepted for publication, 2010.
P16) Eler, DM, Nakazaki, MY, Paulovich, FV, Santos, DP, Andery, GF, Oliveira, MCF, Batista
NJ, Minghim, R, Oliveira, MCF. Visual analysis of image collections. The Visual Computer, v.
25, p. 923-937, 2009.
P17) Ferreira, VG, Kaibara, MK, Lima, GAB, Sabatini, MH, Mancera, PFA, McKee, S. A robust
TVD/NV-based upwinding scheme for solving complex fluid flow problems, International
Journal for Numerical Methods in Fluids. Accepted for publication, 2010.
P18) Gois, JP, Buscaglia, GC. Resampling Strategies for Deforming MLS Surfaces. Computer
Graphics Forum, Accepted for publication, 2010.
P19) Lizier, MAS, Martins Jr., DC, Cuadros-Vargas, AJ, Cesar Jr., RM, Nonato, LG.
Generating segmented meshes from textured color images. Journal of Visual Communication
and Image Representation, v. 20, p. 190-203, 2009.
P20) Moraes, ML, Maki, RM, Paulovich, FV, Rodrigues F, Ubirajara P, de Oliveira, MCF; Riul,
A, de Souza, NC, Ferreira, M, Gomes, HL, Oliveira, ON . Strategies to Optimize Biosensors
Based on Impedance Spectroscopy to Detect Phytic Acid Using Layer-by-Layer Films.
Analytical Chemistry (Washington), v. 82, p. 3239-3246, 2010.
P21) Nagamine, A, Cuminato, JA. A collocation method for solving singular integro-differential
equations, BIT, Accepted for publication, 2010.
P22) Rangarajan, R, Lew, A, Buscaglia, GC. A Discontinuous-Galerkin-based Immersed
Boundary Method with Nonhomogeneous Boundary Conditions and its Application to
Elasticity. Computer Methods in Applied Mechanics and Engineering, v. 198, p. 1513-1534,
2009.
P23) Siqueira, JR, Maki, RM, Paulovich, FV, Werner, CF, Poghossian, A, de Oliveira, MCF,
Zucolotto, V, Oliveira, ON, Schoning, MJ. Use of Information Visualization Methods
Eliminating Cross Talk in Multiple Sensing Units Investigated for a Light-Addressable
Potentiometric Sensor. Analytical Chemistry (Washington), v. 82, p. 61-65, 2010.
P24) Siqueira,M, Xu, D, Gallier, J, Nonato, LG, Morera, DM, Velho, L. A new construction of
smooth surfaces from triangle meshes using parametric pseudo-manifolds. Computers &
Graphics, v. 33, p. 331-340, 2009.
P25) Tomé, MF, Gilcilene S, Sanchez, MA, Alves, FT, Pinho. Numerical solution of the PTT
constitutive equation for unsteady three-dimensional free surface flows. Journal of
Non-Newtonian Fluid Mechanics, vol. 165, pp. 247-262, 2010.
P26) Walters, K, Tamaddon-Jahromi HR, Webster MF, Murilo FT, McKee, S. The competing
roles of extensional viscosity and normal stress differences in complex flows of elastic liquids,
Korea-Australia Rheology Journal, vol. 21, pp. 225-233, 2009.
P27) da Silva, LA, Hernandez, EDM, Moreno, RA, Furuie, SS. Cluster-based Classification
using Self-Organizing Maps for Medical Image Database. International Journal of Innovative
Computing and Applications, v. 2, p. 13-22, 2009.
P28) Massato Kobayashi, LO, Furuie, SS. Proposal for DICOM Multiframe Medical Image
Integrity and Authenticity. Journal of Digital Imaging, v. 22, p. 71-83, 2009.
P29) Massato Kobayashi, LO, Furuie, SS, Barreto, PSLM. Providing Integrity and Authenticity
in DICOM Images: a Novel Approach. IEEE Transactions on Information Technology in
Biomedicine, v. 13, p. 582-589, 2009.
P30) de Sá Rebelo, M, Hummelgaard Aarre, AK, Clemmesen, K-L, Soares Brandão SC,
Giorgi, MC, Meneghetti, JC. Gutierrez, MA. Determination of three-dimensional left ventricle
motion to analyze ventricular dyssyncrony in SPECT images. EURASIP Journal on Advances
in Signal Processing, EURASIP Journal on Advances in Signal Processing. Volume 2010
(2010), Article ID 290695, 9 pages.
P31) Trilha Junior, M, Fancello, EA, Roesler, CRM, More, ADO. Simulação numérica
tridimensional da mecânica do joelho humano. Acta Ortopédica Brasileira v. 17, p. 1, 2009.
P32) Rodrigues, PS, Giraldi, GA. Improving the Non-Extensive Medical Image Segmentation
Based on Tsallis Entropy. Pattern Analysis and Applications, , Accepted for publication 2010.
P33) Thomaz, CE, Giraldi, GA. A new ranking method for principal components analysis and
its application to face image analysis. Image and Vision Computing, v. 28, p. 902-913, 2010.
P34) Seixas, FL, Conci, A, Saade, DCM, de Souza, AS. Intelligent automated brain image
segmentation. International Journal of Innovative Computing and Applications, v. 2, p. 23-33,
2009.
P35) Damasceno, JR, da Silva, MP, Seixas, FL, de Souza, AS, Saade, DCM. Segmentação
Automática e Análise da Volumetria de Substâncias e Estruturas Encefálicas em Imagens de
Ressonância Magnética para Aplicações de Diagnóstico. Revista Eletrônica de Iniciação
Científica, v. 2, p. 1-19, 2009 (in Portuguese).
P36) Moraes, R.M.; Machado, L.S. Gaussian Naive Bayes for Online Training Assessment in
Virtual Reality-Based Simulator. Mathware & Soft Computing, v. 16, p. 123-132, 2009
P37) Machado, L.S.; Moraes, R.M.; Souza, D.F.L; Souza, L.C.; Cunha, I.L.L. A Framework for
Development of Virtual Reality-Based Training Simulators. Studies in Health Technology and
Informatics, v. 142, p. 174-176. IOSPress, 2009
P38) Machado, L.S.; Moraes, R.M. Qualitative and Quantitative Assessment for a VR-Based
Simulator. Studies in Health Technology and Informatics, v. 142, p. 168-173. IOSPress, 2009
P39) Oliveira, A.C.M.T.G., Nunes, F.L.S. Building an Open Source Framework for Virtual
Medical Training. Journal of Digital Imaging, p. 1-15, 2009.
P40) Rieder, R., Raposo, A., Pinho, M.S. A methodology to specify three-dimensional
interaction using Petri Nets. Journal of Visual Languages and Computing (Online). , v.1, p.1 –
20, 2010.
P41) Pinho, M.S., Bowman, D.A., Freitas, C.M.D.S. Cooperative Object Manipulation in
Collaborative Virtual Environments. Journal of the Brazilian Computer Society, v.14, p.53 –
67, 2008
P42) Medeiros, João Paulo S.; Brito Jr., Agostinho M.; Pires, Paulo S. Motta; Santos, S. R.
(2009) Advances in network topology security visualization. International Journal of System of
Systems Engineering (IJSSE), v. 1, p. 387-400.
P43) Burlamaqui, Aquiles M. F. ; Azevedo, Samuel O. ; Dantas, Rummenigge Rudson ;
Schneider, Claudio A. ; Xavier, Josivan S. ; Melo, Julio C. P. ; Gonçalves, Luiz M. G. ; Filho,
Guido L. S. ; de Oliveira, Jauvane C. (2009) The H-N2N framework: towards providing
interperception in massive applications. Multimedia Tools and Applications, v. 45, p. 215-245.
P44) Ahmed, D.T.; Shirmohammadi, S.; Oliveira, J.C. A hybrid P2P communications
architecture for zonal MMOGs. Multimedia Tools and Applications, v. 45, p. 313-345, 2009.
P45) Shirmohammadi, S.; Kazem, I.; Ahmed, D.T.; El-Badaoui, M.; Oliveira, J.C. A Visibility-Driven Approach for Zone Management in Simulations. Simulation (San Diego), v. 84, p. 215-229, 2008.
P46) Malfatti, S.M. ; Santos, S.R.; Fraga, L.M.; Justel, C.M.; Rosa, P.F.F.; Oliveira, J.C. The
Design of a Graphics Engine for the Development of Virtual Reality Applications. Revista de
Informática Teórica e Aplicada, v. XV, p. 25-45, 2008.
P47) Dias, R.D.M.; Freire, S.M. Conceitos demográficos e suas representações nos Sistemas
de Informação em Saúde. Accepted in 2010 for publication in Cadernos Saúde Coletiva
(Federal University of Rio de Janeiro).
P48) Carrano, R.; Magalhães, L.C.S.; Saade, D.C.M.; Albuquerque, C.V.N.. IEEE 802.11s
Multihop MAC: a Tutorial. Accepted in 2009 for publication in IEEE Communications Surveys
and Tutorials. To be published in January 2011.
P49) Teixeira, I.M.; Vicoso, R.P.; Correa, B.S.P.M.; Gomes, A.T.A.; Ziviani, A. Suporte Remoto
ao Atendimento Médico Emergencial via Dispositivos Móveis. REIC. Revista Eletrônica de
Iniciação Científica (Online), v. III, p. 1, 2009.
P50) Martins, F.S., Andrade, R.M.C., Santos, A.L., Schulze, B., de Souza, J.N. Detecting
misbehaving units on computational grids. Concurrency and Computation: Practice &
Experience, v. 22, p. 329-342, 2010.
P51) Schulze, B., Fox, G.C., Special Issue: Advanced Scheduling Strategies and Grid
Programming Environments. Concurrency and Computation: Practice & Experience, v. 22, p.
233-240, 2010.
P52) Mury, A.R., Schulze, B., Gomes, A.T.A., Task distribution models in grids: towards a
profile-based approach. Concurrency and Computation: Practice & Experience, v. 22, p. 358-374, 2010.
P53) Madeira, E.R.M., Schulze, B., Managing Networks and Services of the Future. Journal
of Network and Systems Management, v. 17, p. 1-4, 2009.
P54) Schulze, B., Rana, O., et al., Special Issue: Advanced Strategies in Grid Environments -
Models and Techniques for Scheduling and Programming. Concurrency and Computation:
Practice & Experience, v. 21, p. 1667-1671, 2009.
P55) Cirne, W., Schulze, B., Special Issue: The Best of CCGrid'2007: A Snapshot of an
“Adolescent” Area. Concurrency and Computation: Practice & Experience, v. 21, p. 257-263,
2009.
5.2) Book chapters
B1) Ricardo da Silva Santos, Fabio Antero Pires, Marco Antonio Gutierrez. Chapter:
Mineração de Dados em Bases Assistenciais. In: Marcelo Eidi Nita, Antonio Carlos Coelho
Campino, Silvia Regina Secoli, Flávia Mori Sarti, Moacyr Roberto Cuce Nobre. (Org.).
Avaliação de Tecnologias em Saúde: Evidência Clínica, Análise Econômica e Análise de
Decisão. 1 ed. São Paulo, Brasil: Artmed Editora S.A., 2009, v. 1, p. 96-115.
B2) Fancello, EA, Dallacosta, D, Roesler, CRM. Numerical simulation of bone remodeling
process considering interface tissue differentiation in total hip replacements, to be published
in the book Biomechanics of Hard Tissues, 2010 by Wiley-VCH in Germany. Editor, Andreas
Öechsner.
B3) Machado, L.S.; Siscoutto, R.A. (2010) (Org.) Tendências e Técnicas em Realidade Virtual
e Aumentada. SBC. 101p.
B4) Machado, Liliane S.; Siscoutto, Robson A. (2010) (Org.) Tendências e Técnicas em
Realidade Virtual e Aumentada. Porto Alegre: SBC. v. 1. 101 p. In Portuguese.
B5) Machado, Liliane S.; Moraes, Ronei M. (2010) Intelligent Decision Making in Training
Based on Virtual Reality. In: Da Ruan. (Org.). Computational Intelligence in Complex Decision
Systems. Paris: Atlantis Press.
B6) Machado, Liliane S. (2010) Dispositivos não-convencionais para interação e imersão em
realidade virtual e aumentada. In: José R. F. Brega, Judith Kelner. (Org.) Interação em
realidade virtual e realidade aumentada. Canal 6, p. 23-33. In Portuguese.
B7) Nunes, F.; Machado, L.S.; Costa, Rosa M.E.M. (2009) RV e RA Aplicadas à Saúde. Book
Chapter. In: Rosa Costa e Marcos Wagner. (Org.). Aplicações de Realidade Virtual e
Aumentada. Porto Alegre: SBC, p. 69-89. In Portuguese.
B8) Machado, L.S.; Moraes, R.M.; Nunes, F. (2009) Serious Games para Saúde e
Treinamento Imersivo. Book Chapter. In: Fátima L. S. Nunes; Liliane S. Machado; Márcio S.
Pinho; Cláudio Kirner. (Org.). Abordagens Práticas de Realidade Virtual e Aumentada. Porto
Alegre: SBC, p. 31-60. In Portuguese.
B9) NUNES, F.L.S.; CORRÊA, C.G. (2010) Interação com Java3D. In: Interação com
Realidade Virtual e Realidade Aumentada. 1 ed. Bauru (SP): Canal 6, v. 1, p. 105-118.
B10) NUNES, F.L.S.; DELAMARO, M.E. (2010) Recuperação de imagens baseada em
conteúdo e sua aplicação na área de saúde. In: Computer on the Beach 2010 – Minicourse
Book 1, v. 1, p. 115-144.
B11) CORRÊA, C.G., NUNES, F.L.S. (2009) Interação com dispositivos convencionais e não
convencionais utilizando integração entre linguagens de programação. In: Abordagens
práticas de realidade virtual e aumentada. 1 ed. Porto Alegre (RS): Sociedade Brasileira de
Computação, v. 1, p. 61-103.
B12) NUNES, F.L.S.; Machado, L.S.; COSTA, R.M.E.M. (2009) Realidade Virtual e Realidade
Aumentada aplicadas à Saúde. In: Aplicações de Realidade Virtual e Aumentada. 1 ed. Porto
Alegre (RS): Sociedade Brasileira de Computação, v. 1, p. 69-89.
B13) NUNES, F.L.Santos, BALANIUK, R. (2008) Realidade Virtual aplicada a saúde -
conceitos e situação atual. In: Informática em Saúde. 1 ed. Brasília/Londrina: Editora Universa
(UCB) / Editora da Universidade Estadual de Londrina, 2008, v. 1, p. 325-355.
B14) A. Goldman, B. Schulze, Anais do VIII Workshop em Clouds, Grids e Aplicações - WCGA
10. Porto Alegre: SBC, 2010, v. 1. p. 157.
B15) B. Schulze, G.C.Fox, Concurrency and Computation: Practice and Experience - Special
Issue: Advanced Scheduling Strategies and Grid Programming Environments, 2010 p.160.
B16) B Schulze, J Myers; 'Proceedings of the 7th International Workshop on Middleware for
Grids, Clouds and e-Science (MGC)'; ACM; 2009
B17) B Schulze, A R Mury; 'Proceedings of the 3rd Intl. Latin American Grid Workshop
(LAGrid)'; SBC; 2009
B18) B Schulze, J N de Souza; Anais VII Workshop de Computação em Grade e Aplicações.
Recife – PE, SBC, 2009
B19) B Schulze, O Rana, et al., Concurrency and Computation: Practice and Experience - Special Issue: Advanced Strategies in Grid Environments, 2009, p. 89.
B20) B Schulze, W Cirne, Concurrency and Computation: Practice and Experience - Special
Issue: The Best of CCGrid: A Snapshot of an Adolescent Area, 2009
B21) B Schulze, E R M Madeira, Journal of Network and Systems Management - Special
Issue on Selected extended papers of LANOMS, 2009
B22) CARRANO, R. ; SAADE, Debora Christina Muchaluat ; CAMPISTA, M. E. M. ;
MORAES, I. M. ; ALBUQUERQUE, Celio Vinicius Neves de ; MAGALHÃES, Luiz Claudio
Schara ; RUBINSTEIN, M. G. ; COSTA, L. H. M. K. ; DUARTE, O. C. M. B. . Multihop MAC:
IEEE 802.11s Wireless Mesh Networks. In: Dharma Agrawal; Bin Xie. (Org.). Encyclopedia on
Ad Hoc and Ubiquitous Computing: Theory and Design of Wireless Ad Hoc, Sensor, and
Mesh Networks. 1 ed. Singapore: World Scientific Publishing, 2009, p. 501-532.
5.3) Publications in conference proceedings
C1) Reis Golbert, D, Blanco, PJ, Feijóo, RA. A Lattice-Boltzmann model for simulating the
blood flow in large vessels, First Brazil-China Conference on Scientific Computing -BCSC
2009-, Petrópolis, Brazil, September 21-25, 2009
C2) Reis Golbert, D, Blanco, PJ, Feijóo, RA. Lattice-Boltzmann simulations in computational
hemodynamics, Congresso Ibero-Latino-Americano de Métodos Computacionais em
Engenharia -CILAMCE 2009-, Búzios, Brazil, November 8-11, 2009.
C3) Camargo, E, Blanco, PJ, Feijóo, RA, Silva, RLS. Efficient implementation for particle
tracing in computational hemodynamics. Congresso Ibero-Latino-Americano de Métodos
Computacionais em Engenharia -CILAMCE 2009-, Búzios, Brazil, November 8-11, 2009.
C4) Ziemer, PGP, Collares, M, Camargo, E, Castellani de Freitas, I, Blanco, PJ, Feijóo, RA.
ImageLab: Um sistema multi-orientado na visualização e processamento de imagens
médicas. Congresso Ibero-Latino-Americano de Métodos Computacionais em Engenharia -CILAMCE 2009-, Búzios, Brazil, November 8-11, 2009.
C5) Blanco, PJ, Pivello, MR, Urquiza, SA, Feijóo, RA. Building coupled 3D-1D-0D models in
computational hemodynamics. 1st International Conference on Mathematical and
Computational Biomedical Engineering -CMBE 2009-, Swansea, Wales, June 29-July 1,
2009.
C6) Buscaglia, GC, Leiva, JS, Blanco, PJ. Iterative strong coupling of dimensionally-heterogeneous models. XVIII Congreso sobre Métodos Numéricos y sus Aplicaciones -ENIEF
2009-, Tandil, Argentina, November 3-6, 2009.
C7) Reis Golbert, D, Blanco, PJ, Feijóo, RA. Simulation of 2D and 3D incompressible fluid
flow via a Lattice-Boltzmann model. International Conference on Particle-Based Methods PARTICLES 2009-, Barcelona, Spain, November 25-27, 2009.
C8) Blanco, PJ, Leiva, JS, Buscaglia, GC. Partitioned analysis of dimensionally-heterogeneous models for the Navier-Stokes equations. SIAM Conference on Analysis of
Partial Differential Equations -PD09-, Miami, United States, December 7-11, 2009.
C9) Blanco, PJ, Feijóo, RA. Sensitivity analysis for dimensionally-heterogeneous models. 8th
World Congress on Structural and Multidisciplinary Optimization -WCSMO-8-, Lisbon,
Portugal, June 1-5, 2009.
C10) Blanco, PJ, Buscaglia, GC, Leiva, JS. Iterative strong coupling of dimensionally-heterogeneous models. VIII Workshop on Partial Differential Equations -WPDE 2009-, Rio de
Janeiro, Brazil, November 3-6, 2009.
C11) Ziemer, PGP, Costa, RG, Blanco, PJ, Schulze, BR, Feijóo, RA. Porting a hemodynamics
simulator for a grid computing environment. VIII Workshop de Computação em Clouds, Grids
e Aplicações -WCGA 2010-, Gramado, Brazil, May 24-28, 2010.
C12) Queiroz, RAB, Giraldi, GA, Blanco, PJ, Feijóo, RA. Determining optical flow using a
modified Horn and Schunck's algorithm. 17th International Conference on Systems, Signals
and Image Processing -IWSSIP 2010-, Rio de Janeiro, Brazil, June 17-19, 2010.
C13) Feijóo, RA, Blanco, PJ. The role of the variational formulation in the hetero-dimensional
and multiscale modeling of the cardiovascular human system. IV International Symposium on
Modelling of Physiological Flows -MPF 2010-, Chia Laguna, Italy, June 2-5, 2010.
C14) Blanco, PJ, Feijóo, RA. Coupled heterogeneous models accounting for arterial-venous
circulation: monolithic and iterative approaches. IV International Symposium on Modelling of
Physiological Flows -MPF 2010-, Chia Laguna, Italy, June 2-5, 2010.
C15) Blanco, PJ, Discacciati, M, Quarteroni, A. A domain decomposition framework for
modeling dimensionally heterogeneous problems. Workshop on Domain Decomposition
Solvers for Heterogeneous Field Problems -DDHF 2010-, Hirschegg, Austria, June 2-6, 2010.
C16) BONILLA, D.; VELHO, L.; NACHBIN, A.; NONATO, L. G. Fluid Warping. In: IV
Iberoamerican Symposium in Computer Graphics - SIACG, 2009, Margarita Island.
Proceedings SIACG’09, 2009. p. 1-6.
C17) Martins, F. P., Cuminato, J.A., OISHI, C.M., QUEIROZ, R. B., FERREIRA, V. G., “Uma
abordagem implícita para simular escoamentos viscoelásticos com superfícies livres usando
modelos do tipo POM-POM”, Congreso de Métodos Numéricos en Ingeniería, 2009,
Barcelona, Spain.
C18) Lima, G. A. B., FERREIRA, V. G., QUEIROZ, R. B., Candezano, M.A.C., Correa, L.,
“Development and evaluation of two new upwind schemes for conservation laws”, 20th
International Congress of Mechanical Engineering - Cobem 2009, 2009, Gramado - RS.
C19) Lima, G. A. B., QUEIROZ, R. B., FERREIRA, V. G., “A TVD-Based Upwinding scheme
for compressible and incompressible flows”, Congresso Ibero-Latino-Americano de Métodos
Computacionais em Engenharia - 30o. CILAMCE, 2009, Búzios - RJ.
C20) Lima, G. A. B. e FERREIRA, V. G., Sobre Variação Total e Convergência de Três
Esquemas Upwind para Leis de Conservação, XXXII Congresso Nacional de Matemática
Aplicada e Computacional - CNMAC 2009, 2009, Cuiabá - MT.
C21) Lima, G. A. B. e FERREIRA, V. G., “Uma avaliação computacional de três esquemas de
discretização "upwind" para leis de conservação não-lineares”, Brazilian Conference on
Dynamics, Control and Applications - DINCON, 2009, 2009, Bauru - SP.
C22) Barbosa, Fernanda Paula; Pola, Ives Renê Venturini; Mangiavacchi, Norberto; Castelo,
Antonio. A Numerical Method for Solving Three-Dimensional Incompressible Fluid Flows for
Hydroelectric Reservoir Applications. 20th International Congress of Mechanical Engineering,
November 15-20, 2009, Gramado, RS, Brazil.
C23) MACEDO, I. ; GOIS, J. P. ; VELHO, L. . Hermite Interpolation of Implicit Surfaces with
Radial Basis Functions. In: Brazilian Symposium on Computer Graphics and Image
Processing, 2009, Rio de Janeiro. 22th SIBGRAPI, 2009.
C24) JAI, M.; CIUPERCA, I.; BUSCAGLIA, G. C.; EL ALAOUI, M. Topological asymptotic
expansions for a nonlinear elliptic equation with small inclusions. Application to the general
compressible Reynolds equation. In: 3rd International Conference on Approximation Methods
and Numerical Modelling in Environment and Natural Resources, 2009, Pau, France.
Proceedings of MAMERN’09, 2009.
C25) BUSCAGLIA, G. C.; RANGARAJAN, R.; LEW, A. J. Immersed boundaries without
boundary locking: A DG-based approach. In: Academy Colloquium on Immersed Boundary
Methods: Current Status and Future Research Directions, 2009, Amsterdam, The Netherlands.
Academy Colloquium on Immersed Boundary Methods. Amsterdam: Royal Academy of
Science, The Netherlands, 2009. v. 1. p. 20-21.
C26) Roberto F. Ausas, Enzo A. Dari and Gustavo C. Buscaglia. UNA FORMULACION
MONOLITICA PARA FLUJOS A SUPERFICIE LIBRE CON CALCULO NUMERICO DEL
JACOBIANO. Mecánica Computacional Vol XXVIII, pp. 1391-1407 (full paper). Presented at
the ENIEF’2009 congress, Tandil, Argentina, November 2009.
C27) Roberto F. Ausas, Enzo A. Dari and Gustavo C. Buscaglia. OPCIONES EN LA
FORMULACION POR ELEMENTOS FINITOS PARA LA FUERZA DE TENSION
SUPERFICIAL. Mecánica Computacional Vol XXVIII, pp. 1371-1389 (full paper). Presented at
the ENIEF’2009 congress, Tandil, Argentina, November 2009.
C28) SOUSA, F. S., AUSAS, R. F., BUSCAGLIA, G. C. An improved finite element space with
embedded discontinuities. In: XXX Iberian Latin-American Congress on Computational
Methods in Engineering, 2009, Armação dos Búzios - RJ. Proceedings of XXX CILAMCE. v.1.
p.1 - 11, 2009.
C29) SOUSA, F. S., AUSAS, R. F., BUSCAGLIA, G. C. Improved interpolants for
discontinuous pressures. In: XVIII Congreso sobre Métodos Numéricos y sus Aplicaciones,
2009, Tandil, Argentina. Mecánica Computacional. AMCA, v.28. p.1131 - 1148, 2009.
C30) SILVA, A. A. N., SOUSA, F. S. Simulação numérica de escoamentos bidimensionais
com superficies livres e ângulo de contato. In: XXX Iberian Latin-American Congress on
Computational Methods in Engineering, 2009, Armação dos Búzios - RJ. Proceedings of XXX
CILAMCE. v.1. p.1 - 14, 2009.
C31) PETRI, L. A., OISHI, C. M., SOUSA, F. S., BUSCAGLIA, G. C. Sobre a escolha de
métodos para escoamentos incompressíveis em microescala. In: XXX Iberian Latin-American
Congress on Computational Methods in Engineering, 2009, Armação dos Búzios - RJ.
Proceedings of XXX CILAMCE. v.1. p.1 - 13, 2009.
C32) PINHO, R. D. ; LOPES, A. A. ; OLIVEIRA,M. C. F. . Incremental Board: A Grid-based
Space for Visualizing Dynamic Data Sets. In: ACM Symposium on Applied Computing,
Multimedia and Visualization track, 2009, Honolulu, Hawaii. Proceedings 24th Annual ACM
Symposium on Applied Computing, 2009. v. 1. p. 1757-1764.
C33) PINHO, R. D. ; OLIVEIRA,M. C. F. . HexBoard: conveying pairwise similarity in a
incremental visualization space. In: XIII International Conference on Information Visualization
(IV09), 2009, Barcelona. Proceedings 13th. International Conference on Information
Visualization (IV09). Los Alamitos, CA : IEEE Computer Society Press, 2009. v. 1. p. 32-37.
C34) ELER, D. M.; PAULOVICH, F.V.; OLIVEIRA, M. C. F.; MINGHIM, R. Topic-based
coordination for visual analysis of evolving document collections. In: XIII International
Conference on Information Visualization/7th International Symposium on Coordinated &
Multiple Views in Visualisation & Exploration, 2009, Barcelona. Proceedings 13th International
Conference on Information Visualization (IV09). Los Alamitos, CA : IEEE Computer Society
Press, 2009. v. 1. p. 149-155.
C35) Watanabe, L. S. ; Franchin, W. ; Levkowitz, H. ; MINGHIM, R. . Development,
Implementation, and Evaluation of Sonification Tools for Point-and-Surface-Based Data
Exploration. In: 13th International Conference on Information Visualisation, 2009, Barcelona.
Proceedings of the 13th International Conference on Information Visualization, 2009. p. 3-9.
C36) Gonçalves,W. N. ; Machado, B. B. ; BATISTA NETO, J. E. S. ; BRUNO, Odemir M . A
Complex Network Approach to Texture Applied to Medical Image Classification. In: II
ECCOMAS Thematic Conferences on Computational Vision and Medical Image Processing,
2009, Porto. Anais do II ECCOMAS Thematic Conferences on Computational Vision and
Medical Image Processing, 2009.
C37) Carlos da Silva Santos, Luis Roberto Pereira de Paula, Marco Antonio Gutierrez, Marina
S. Rebelo, Roberto Hirata Jr. MIV: A Cardiac Image Visualizer. In Proceedings of Sibgrapi
2009. PUC Rio de Janeiro, October 11-14, 2009.
C38) Danilo M Lage, Jeanne M Tsutsui, Sérgio Shiguemi Furuie. Epicardial Coronary
Angiography from Microbubble-Based Tridimensional Echocardiography: A Feasibility Study.
In: Computers in Cardiology 13-16 Sep 2009, v. 36, p. 777-780. Park City, Utah September
13-16, 2009.
C39) Fernando JR Sales, JLAA Falcão, BAA Falcão, Sergio S Furuie, Pedro A Lemos.
Estimation of Coronary Atherosclerotic Plaque Composition Based Only on GreyScale
Intravascular Ultrasound Images. In: Computers in Cardiology 13-16 Sep 2009, v. 36, p. 645-648.
Park City, Utah, September 13-16, 2009.
C40) Maurício Higa, Paulo Eduardo Pilon, Silvia G Lage, Marco Antonio Gutierrez. A
Computational Tool for Quantitative Assessment of Peripheral Arteries in Ultrasound Images.
In: Computers in Cardiology 13-16 Sep 2009, v. 36, p. 41-44. Park City, Utah, September 13-16, 2009.
C41) Monica M. S. Matsumoto, Pedro Lemos, Sérgio S. Furuie, IVUS coronary volume
alignment for distinct phases. In: Medical Imaging 2009: Ultrasonic Imaging and Signal
Processing, 2009, Orlando. Proc. of SPIE, 2009. v. 7265. p. 72650X-1-72650X-7.
C42) Ramon Alfredo Moreno, Marco Antonio Gutierrez, Rita Porfirio. A prototype for medical
image processing using EELA-2 infrastructure. EELA-2 Conference, Choroní, Venezuela,
25-27 November, 2009.
C43) SELKE, A.; FANCELLO, E. A.; STAINIER, Laurent. A variational formulation for a set
of hyperelastic-viscoplastic material models in a fully coupled thermomechanical problem. In:
11th Pan-American Congress of Applied Mechanics, 2010, Foz do Iguaçú. 11th Pan-American
Congress of Applied Mechanics, 2010. v. 1.
C44) VASSOLER, Jakson Manfredini; REIPS, L.; FANCELLO, E. A. A variational
viscoelastic framework for fiber reinforced soft tissues. In: 11th Pan-American Congress of
Applied Mechanics, 2010, Foz do Iguaçú. 11th Pan-American Congress of Applied
Mechanics, 2010. v. 1. p. 1-10.
C45) REIPS, L. ; VASSOLER, Jakson Manfredini ; FANCELLO, E. A. . A variational
viscoelastic framework for fiber reinforced soft tissue. In: International Conference on Tissue
Engineering, 2009, Leiria. International Conference on Tissue Engineering, 2009. v. 1. p. 115.
C46) SELKE, A. ; FANCELLO, E. A. ; STAINIER, Laurent . Variational constitutive updates for
a fully coupled thermo-mechanical problem. In: 20th International Congress of Mechanical
Engineering, 2009, Gramado. 20th International Congress of Mechanical Engineering, 2009.
v. 1. p. 1-15.
C47) VASSOLER, Jakson Manfredini ; REIPS, L. ; FANCELLO, E. A. . Variational viscoelastic
models for fiber reinforced soft tissues. In: 20th International Congress of Mechanical
Engineering, 2009, Gramado. 20th International Congress of Mechanical Engineering, 2009.
v. 1. p. 1-10.
C48) ROESLER, Carlos Rodrigo de Mello ; BARBI, J. C., LOPES, M., MORÉ, A. D. O.
Análise Comparativa de Diferentes Acabamentos Superficiais Metálicos Utilizados em
Endopróteses. In: II Encontro Nacional de Engenharia Biomecânica, Florianópolis, 2009.
C49) ROESLER, Carlos Rodrigo de Mello; CAMINHA, I. M., KEIDE, H., DALLACOSTA, D.,
GUIMARÃES NETO, A.C., Influência das Diferentes Densidades de Espumas Rígidas nos
Resultados de Ensaios de Inserção e Remoção de Parafusos Ósseos. In: V Congresso
Brasileiro de Metrologia, Salvador, 2009.
C50) DORNELLES, Mauro Fagundes, MORÉ, A.D.O., ROESLER, C.R.M., Resistência
Mecânica de Fixações Ligamentares do Joelho. In: II Encontro Nacional de Engenharia
Biomecânica, Florianópolis, 2009.
C51) MEDEIROS, Carolina Brum, CARVALHO, J.M., MORAES, V.M., DALLACOSTA, D.,
BENTO, D.A., ROESLER, C.R.M. Sistema de medição de temperatura sem fio para análise
da geração de calor em próteses articulares. In: II Encontro Nacional de Engenharia
Biomecânica, Florianópolis, 2009.
C52) GUIMARÃES NETO, Antônio Carlos, ROESLER, C.R.M., VASSOLER, J.M.,
FANCELLO, E.A., Parameters Identification on Stress x Strain Curve of Polymeric Materials –
Sensitivity Analysis. In: 2009 ESSS South American ANSYS Users Conference, Florianópolis,
2009.
C53) MACHADO, Renato Reis, KOCH, C.A., MARTINS, A.R., ROESLER, C.R.M., CAMINHA,
I. M., Identification of the Parameters That Influence the Uncertainty Sources on Orthopaedic
Implants Fatigue Tests. In: XIX IMEKO World Congress – Fundamental and Applied
Metrology, Lisbon, Portugal, 2009.
C54) SEIXAS, Flávio Luiz; SAADE, Débora Christina Muchaluat ; CONCI, A. ; Souza, Andrea
Silveira; Tovar-Moll, Fernanda; Bramatti, Ivanei. Anatomical Brain MRI Segmentation
Methods: Volumetric Assessment of the Hippocampus. In: IWSSIP 2010 - 17th International
Conference on Systems, Signals and Image Processing, Rio de Janeiro, 2010.
C55) CONCI, A. ; PLASTINO, A. ; SOUZA, A. S. ; KUBRUSLY, C. S. ; SAADE, Débora
Christina Muchaluat ; SEIXAS, Flávio Luiz . Automated Segmentation and Clinical Information
on Dementia Diagnosis. In: International Workshop on Medical Image Analysis and
Description for Diagnosis Systems (MIADS) em conjunto com International Joint Conference
on Biomedical Engineering Systems and Technologies (BIOSTEC), 2009, Porto. Proceedings
of the International Workshop on Medical Image Analysis and Description for Diagnosis
Systems, 2009. p. 33-42.
C56) Ladjane Coelho dos Santos, Luciete Alves Bezerra, Thiago Leite Rolim, Paulo Roberto
Maciel Lyra, Marcus Costa de Araújo, Ewerton Diego Castro Silva, Aura Conci and Rita de
Cássia Fernandes de Lima. DESENVOLVIMENTO DE FERRAMENTA COMPUTACIONAL
PARA ANÁLISE PARAMÉTRICA DA INFLUÊNCIA DA POSIÇÃO E DO TAMANHO DO
TUMOR DE MAMA EM PERFIS DE TEMPERATURA. CIBIM 9, 9º Congreso Iberoamericano
de Ingeniería Mecánica, Las Palmas, Gran Canaria, 17-20 November 2009.
C57) Tiago Bonini Borchartt (UFSM), Marcos Cordeiro d’Ornellas (UFSM), Aura Conci (UFF),
Alicia Del Carmen Becerra Romero (USP) and Paulo Henrique Pires de Aguiar (USP). ON A
NEW USE OF AUTOMATIC MORPHING TECHNIQUES: TO CORRECT DISTORTION OF
ENDOSCOPIC SYSTEMS. Presented at CILAMCE 2009, 8-11 November, Búzios, RJ.
C58) Luciano Oliveira Junior, Aura Conci (UFF). On the possibility of fingerprint identification
by pores detection in 500 dpi images. SIBGRAPI 2009 - XXII Brazilian Symposium on
Computer Graphics and Image Processing, Pontifícia Universidade Católica do Rio de
Janeiro (PUC-Rio), October 11-14, 2009, extended abstract, poster, 2 pp.
C59) Esteban Clua, Anselmo Montenegro, Micheli Andrade, Aura Conci. An automatic method
of applying color in digital images. SIBGRAPI 2009 - XXII Brazilian Symposium on
Computer Graphics and Image Processing, Pontifícia Universidade Católica do Rio de
Janeiro (PUC-Rio), October 11-14, 2009, extended abstract, poster, 2 pp.
C60) Tiago Borchartt, Aura Conci, Marcos d’Ornellas. A warping based approach to correct
distortions in endoscopic images. SIBGRAPI 2009 - XXII Brazilian Symposium on
Computer Graphics and Image Processing, Pontifícia Universidade Católica do Rio de
Janeiro (PUC-Rio), October 11-14, 2009, extended abstract, poster, 2 pp.
C61) Rodrigo Serrano, Leonardo Motta, Monica Batista, Aura Conci (UFF). Using a new
method in thermal images to diagnose early breast diseases. SIBGRAPI 2009 - XXII
Brazilian Symposium on Computer Graphics and Image Processing, Pontifícia Universidade
Católica do Rio de Janeiro (PUC-Rio), October 11-14, 2009, extended abstract, poster, 2 pp.
C62) Victor Oliveira, Aura Conci (UFF). Skin detection using HSV color space. SIBGRAPI
2009 - XXII Brazilian Symposium on Computer Graphics and Image Processing, Pontifícia
Universidade Católica do Rio de Janeiro (PUC-Rio), October 11-14, 2009, extended abstract,
poster, 2 pp.
C63) J. R. Bokehi, N. C. M. Vasconcellos and A. Conci, Use of Coherence Measurements
between EEG and EMG on Identification of the Myoclonus Locus, Paper ID: 914852,
presented in Session: Image processing for medical applications, 4pp. IWSSIP 2009: 16th
International Workshop on Systems, Signals and Image Processing for the year 2009,
organized by the Technological Educational Institute of Chalkida, 18-20 June.
C64) Otton Teixeira da Silveira Filho, Aura Conci, Rodrigo Carvalho, Rafael Mello, Rita Lima.
Paper ID: 549374. On Using Lacunarity for Diagnosis of Breast Diseases Considering Thermal
Images, presented in Session: Image processing for medical applications, 4 pp. IWSSIP
2009: 16th International Workshop on Systems, Signals and Image Processing for the year
2009, organized by the Technological Educational Institute of Chalkida, 18-20 June.
C65) A. Conci, C.S. Kubrusly and Thomas Walter Rauber. Influence of the Wavelet Family in
the Compression-Denoising Technique on Synthetic and Natural Images, Paper ID: 26455,
presented in Session: Image processing, 4 pp. IWSSIP 2009: 16th International
Workshop on Systems, Signals and Image Processing for the year 2009, organised by the
Technological Educational Institute of Chalkida, 18-20 June.
C66) Aura Conci; Marcello Fonseca; Carlos Kubrusly and Thomas Raubert, "CONSIDERING
THE WAVELET TYPE AND CONTENTS ON THE COMPRESSION DECOMPRESSION
ASSOCIATED WITH IMPROVEMENT OF BLURRED IMAGES", Paper number: 273, in
Proceedings of the International Joint Conference on Computer Vision, Imaging and
Computer Graphics Theory and Applications – VISIGRAPP/VISAPP, vol. II, pp. 79-84
(CD-ROM), 5-8 February, 2009, Lisbon, Portugal. ISBN 978-989-8111-74-6. Presented at
parallel section 4 - Image Formation and Processing, Book of Abstracts p. 75, INSTICC
PRESS, Lisbon, Portugal, in cooperation with ACM Siggraph and Eurographics.
C67) Rodrigo Carvalho Serrano, Marcelo Zamith, Michele Knechtel, Anselmo Montenegro,
Esteban Walter Gonzalez Clua, Aura Conci, Luciete A. Bezerra, Rita de Cássia F. de Lima.
Reconstruindo as imagens termográficas a partir dos arquivos JPEG em false color para
auxílio no diagnóstico mastológico. II Encontro Nacional de Engenharia Biomecânica: ENEBI
2009, 6-8 May, Florianópolis, SC, p. 83-84 of the event CD-ROM.
C68) Felipe J. Castro, Simone Vasconselos, Rodrigo Carvalho Serrano, Leonardo Soares
Motta, Pedro Martins Menezes, Luciete A. Bezerra, Rita de Cássia Fernandes de Lima, Aura
Conci. Um sistema para pré-processamento de imagens térmicas e modelagem tridimensional
aplicadas à Mastologia. Paper 104, II Encontro Nacional de Engenharia Biomecânica: ENEBI
2009, 6-8 May, Florianópolis, SC, p. 104-105 of the event CD-ROM.
C69) GULIATO, D. ; SANTOS, Jean Carlo de Souza . Granular Computing and Rough Sets to
Generate Fuzzy Rules. In: 6th International Conference on Image Analysis and Recognition,
2009, Halifax. Lecture Notes in Computer Science - Image Analysis and Recognition. Berlin :
Springer-Verlag, 2009. p. 317-326.
C70) SANTOS, Jean Carlo de Souza ; GULIATO, D. . Proposta de um método para geração
automatica de regras fuzzy baseada na teoria dos rough sets. In: XXXV Latin American
Informatics Conference - CLEI, 2009, Pelotas -RS. XXXV Latin American Informatics
Conference - CLEI. Pelotas - RS, 2009. v. CD-ROM. p. 1-8.
C71) Genari, A.C. ; GULIATO, D.. Similarity Measures based on Fuzzy Sets. In: XXII Brazilian
Symposium on Computer Graphics and Image Processing SIBGRAPI, 2009, Rio de Janeiro RJ. XXII Brazilian Symposium on Computer Graphics and Image Processing SIBGRAPI. Rio
de Janeiro, 2009. v. CD-ROM. p. 1-2.
C72) GIRALDI, G. A. ; NEVES, L. A. P. ; P. H. M. Lira . An Automatic Morphometrics Data
Extraction Method in Dental X-Ray Image. In: International Conference on Biodental
Engineering - BIODENTAL 2009, 2009, Porto. Proc. of the International Conference on
Biodental Engineering, 2009.
C73) GIRALDI, G. A.; NEVES, L. A. P.; Adriana Costa; OLIVEIRA, D. E. M.; KUCHLER, E.
C. Automatic Data Extraction in Odontological X-Ray Imaging. In: IMAGAPP - International
Conference on Imaging Theory and Applications, 2009, Lisbon. Proc. of the International
Conference on Imaging Theory and Applications, 2009. v. 1. p. 141-144.
C74) C.E. Thomaz; GIRALDI, G. A. A Kernel Maximum Uncertainty Discriminant Analysis
and its Application to Face Recognition. In: International Joint Conference on Computer Vision
and Computer Graphics Theory and Applications, 2009, Lisbon. Proc. of the International
Joint Conference on Computer Vision and Computer Graphics Theory and Applications, 2009.
C75) P. H. M. Lira ; GIRALDI, G. A. ; NEVES, L. A. P. . Panoramic Dental X-Ray Image
Segmentation and Feature Extraction. In: V Workshop de Visão Computacional (WVC 2009),
2009, São Paulo. Proc. of the WVC 2009, 2009. v. 1.
C76) Danubia de Araujo Machado ; GIRALDI, G. A. ; A.A. Novotny . Segmentation Approach
Based on Topological Derivative and Level Set. In: 17th International Workshop on Systems,
Signals and Image Processing, 2010, Rio de Janeiro. Selected for a Special Issue for the
Journal “Integrated Computer-Aided Engineering (ICAE)”.
C77) NEVES, L. A. P. ; GIRALDI, G. A. . An On-Line Medical Imaging Management System
for Shared Research in the Web, using Pattern Features. In: Workshop de Visao
Computacional - WVC 2010, 2010, Presidente Prudente. Proc. of the WVC 2010, 2010.
C78) Douglas E. M. Oliveira, Fabio Porto, Gilson A. Giraldi, Bruno R. Schulze, Raquel C. G.
Pinto. “QEF - A Query Processing System for Scientific Visualization in Grids”. Submitted to
SBBD 2010
C79) RODRIGUES, Paulo Sérgio ; GIRALDI, G. A. . Computing the q-index for Tsallis
Nonextensive Image Segmentation. In: XXII Brazilian Symposium on Computer Graphics
and Image Processing, 2009, Rio de Janeiro.
C79) Santos, Alysson D.; Machado, Liliane S.; Moraes, Ronei M.; Gomes, Renata G. S.
(2010) Avaliação baseada em lógica fuzzy para um framework voltado à construção de
simuladores baseados em RV. In: XII Symposium on Virtual and Augmented Reality 2010,
Natal. Proc. of XII Symposium on Virtual Reality. p. 194-202. In Portuguese.
C80) Moraes, R.M.; Machado, L.S. (2009) Fuzzy Continuous Evaluation in Training Systems
Based on Virtual Reality. In: Proc. of 2009 IFSA World Congress, Lisbon. p. 102-107.
C81) Machado, L.S.; Moraes, R.M. (2009) Continuous Evaluation in Training Systems Based
on Virtual Reality Using Fuzzy Rule Based Expert Systems. In: Proc. International
Conference on Engineering and Computer Education (ICECE2009). Buenos Aires, Argentina.
C82) Moraes, R.M.; Machado, L.S.; Souza, L.C. (2009) Online Assessment of Training in
Virtual Reality Simulators Based on General Bayesian Networks. In: Proc. International
Conference on Engineering and Computer Education (ICECE2009). Buenos Aires, Argentina.
C83) Moraes, R.M.; Machado, L.S. (2009) Another Approach for Fuzzy Naive Bayes Applied
on Online Training Assessment in Virtual Reality Simulators. In: Proc. Safety, Health and
Environmental World Congress (SHEWC'2009), Mongaguá, Brazil. p. 62-66.
C84) Moraes, R.M.; Machado, L.S. (2009) Online Training Evaluation in Virtual Reality
Simulators Using Possibilistic Networks. In: Proc. Safety, Health and Environmental World
Congress (SHEWC'2009), Mongaguá, Brazil. pp. 67-71.
C85) Gomes, Andre C. B.; Machado, Liliane S. (2009) Calibracao de Propriedades Materiais
para Incorporacao de Toque em Sistemas de Realidade Virtual. In: Workshop de Realidade
Virtual e Aumentada - WRVA'2009, Santos/SP. Anais do Workshop de Realidade Virtual e
Aumentada.
C86) Carvalho Jr., Antonio D.; Souza, Daniel F. L; Machado, Liliane S. (2009) Utilizacao de
Rastreadores Magneticos no Desenvolvimento de Aplicacoes com Realidade Virtual para a
Educacao. In: Workshop de Realidade Virtual e Aumentada - WRVA'2009, Santos. Anais do
Workshop de Realidade Virtual e Aumentada.
C87) Santos, A.D.; Machado, L.S. (2009) Realidade Virtual Aplicada ao Ensino de Medicina:
Taxonomia, Desafios e Resultados. In: Anais do Workshop de Realidade Virtual e
Aumentada, Santos, Brazil. CD-ROM.
C88) TORI, R.; NUNES, F.L.S.; Nakamura, R., Bernardes Junior, J. L., CORRÊA, C.G.,
TOKUNAGA, D.M. (2009) Design de Interação para um Atlas Virtual de Anatomia Usando
Realidade Aumentada e Gestos In: Interaction 2009 - South America, 2009, São Paulo - SP.
Proceedings of Interaction 2009 - South America., v.1. p.1–8.
C89) CORRÊA, C.G.; NUNES, F.L.S.; BEZERRA, A.; CARVALHO JUNIOR, P.M. (2009)
Evaluation of VR Medical Training Applications under the Focus of Professionals of the Health
Area In: The 24th Annual ACM Symposium on Applied Computing, 2009, Honolulu, Hawaii,
USA. Proceedings of The 24th Annual ACM Symposium on Applied Computing. New York,
USA: Association for Computing Machinery, v.1. p.821–825.
C90) OLIVEIRA, A.C.M.T.G.; NUNES, F.L.S. (2009) ViMeT and ViMetWizard: Process of
Building a Framework and an Instantiation Tool for Programming Application in Medical
Training Domain Using Free Technology In: The 2009 Annual Meeting of the Society for
Imaging Informatics in Medicine, 2009, Charlotte (NC). Proceedings of The 2009 Annual
Meeting of the Society for Imaging Informatics in Medicine. Charlotte (NC), v.1. p.50-52.
C91) BEZERRA, A.; NUNES, F.L.S., CORRÊA, C.G. (2009) Avaliação de uma luva de dados
em um sistema virtual para aplicações de treinamento médico In: WIM2009 - IX Workshop de
Informática Médica / XXIX Congresso da Sociedade Brasileira de Computação, Bento
Gonçalves (RS). Anais do XXIX Congresso da Sociedade Brasileira de Computação. Porto
Alegre (RS): Sociedade Brasileira de Computação, v.1.
C92) TORI, R., NUNES, F.L.S.; GOMES, V.H.P.; TOKUNAGA, D.M. (2009) VIDA: Atlas
Anatômico 3D Interativo para Treinamento a Distância In: WIE2009 - X Workshop de
Informática na Escola / XXIX Congresso da Sociedade Brasileira de Computação, 2009,
Bento Gonçalves (RS). Anais do XXIX Congresso da Sociedade Brasileira de Computação.
Porto Alegre (RS): Sociedade Brasileira de Computação.
C93) NUNES, F.L.S.; Maganha, C.R.; CORRÊA, C.G.; TORI, R.; Barbosa, J.H.A.; Picchi, F.L.;
Nakamura, R.A. (2010) Importância da avaliação na engenharia de requisitos em sistemas
de Realidade Virtual e Aumentada: um estudo de caso In: XII Symposium on Virtual and
Augmented Reality, 2010, Natal (RN). Proceedings of XII Symposium on Virtual and
Augmented Reality. Natal (RN): Sociedade Brasileira de Computação.
C94) NUNES, F.L.S.; DELAMARO, M.E.; OLIVEIRA, R.A.P. (2009) Oráculo gráfico como
apoio na avaliação de sistemas de auxílio ao diagnóstico In: WIM2009 - IX Workshop de
Informática Médica / XXIX Congresso da Sociedade Brasileira de Computação, 2009, Bento
Gonçalves (RS). Anais do XXIX Congresso da Sociedade Brasileira de Computação. Porto
Alegre (RS): Sociedade Brasileira de Computação.
C95) TOKUNAGA, D.M.; CORRÊA, C.G.; NAKAMURA, R.; NUNES, F.L.S.; TORI, R. (2010)
Non-Photorealistic Rendering in Stereoscopic 3D Visualization. In: Poster SIGGRAPH – 37th
International Conference and Exhibition on Computer Graphics and Interactive Techniques,
Los Angeles, California, USA.
C96) RÉZIO, A.C.C., PEDRINI, H. (2010) Avaliação de interpoladores para super-resolução
de vídeos. VI Workshop de Visão Computacional, Presidente Prudente-SP, Brazil.
C97) ABREU, C.G.; PARENTE, M.R.; BRASIL, L.M. ; MELO, J.S.S.; SILVA, A.P.B.; SOUZA,
G.D. (2009) A Study of Virtual Reality in Mastology. In: World Congress on the Medical
Physics and Biomedical Engineering - WC2009, Munich. World Congress on the Medical
Physics and Biomedical Engineering - WC2009. Berlin: Springer Berlin Heidelberg, v. 25/IV.
p. 1115-1118.
C98) MELO, J.S.S.; BRASIL, L.M. ; BALANIUK, R.; FERNEDA, E.; SANTANA, J.S. (2009)
Intelligent Tutoring Systems Based on Ontologies and 3D Visualization Platforms in The
Teaching of The Human Anatomy. In: World Congress on the Medical Physics and Biomedical
Engineering - WC2009, Munich. World Congress on the Medical Physics and Biomedical
Engineering - WC2009. Berlin: Springer Berlin Heidelberg, v. 25/IV. p. 16-19.
C99) PRETTO, Fabricio, MANSSOUR, Isabel Harb, SILVA, E., Lopes, Maria Helena Itaqui,
PINHO, M. S. Augmented Reality Environment for Life Support Training In: 24th Annual ACM
Symposium on Applied Computing, 2009, Honolulu, Hawaii, USA. 24th Annual ACM
Symposium on Applied Computing. ACM, 2009. v.1. p.164 - 1692.
C101) TROMBETTA, A. B., PINHO, M. S.Projeto e Desenvolvimento de um Dispositivo de
Interação para Visualização de Imagens Médicas em Ambientes Imersivos In: XI Symposium
on Virtual and Augmented Reality, 2009, Porto Alegre. XI Symposium on Virtual and
Augmented Reality. Porto Alegre: Sociedade Brasileira de Computação, 2009. v.1. p.278 –
288.
C102) Tales Nereu Bogoni, PINHO, M. S. Sistema para Monitoramento de Técnicas de
Direção Econômica em Caminhões com Uso de Ambientes Virtuais Desktop In: XI
Symposium on Virtual and Augmented Reality, 2009, Porto Alegre. XI Symposium on Virtual
and Augmented Reality. Porto Alegre: Sociedade Brasileira de Computação, 2009. v.1. p.103
- 113
C103) RIEDER, Rafael, PINHO, M. S., RAPOSO, A. Using Petri Nets to Specify Collaborative
Three Dimensional Interaction In: 13th International Conference on Computer Supported
Cooperative Work in Design, 2009, Santiago. Proceedings of the 2009 13th International
Conference on Computer Supported Cooperative Work in Design. IEEE Computer Society,
2009. p.456 - 461
C104) BACIM, Felipe, BOWMAN, Doug A., PINHO, M. S. Wayfinding Techniques for
Multiscale Virtual Environments In: IEEE Symposium on 3D User Interfaces 2009, 2009,
Lafayette, Louisiana, USA. IEEE Symposium on 3D User Interfaces 2009. IEEE, 2009. v.1.
p.1 – 8
C105) SANTOS, S. R., SILVA, B. M. F., OLIVEIRA, Jauvane C. (2009) “Camera Control
Based on Rigid Body Dynamics for Virtual Environments”. In: IEEE International Conference
on Virtual Environments, Human-Computer Interfaces, and Measurement Systems, VECIMS
2009, May 11-13, 2009, Hong Kong, China; IEEE publishers, v.1. p.344 – 349.
C106) SANTOS, S. R., BEZERRA, L., FEITOSA NETO, A. A., MALFATTI, Silvano
(2009) “FAITH: A Desktop Virtual Reality System for Fingerspelling”. In: XI SYMPOSIUM
ON VIRTUAL AND AUGMENTED REALITY, SVR 2009, 2009, Porto Alegre. Anais do XI
SYMPOSIUM ON VIRTUAL AND AUGMENTED REALITY. Porto Alegre: SBC, v.1. p.189 –
198.
C107) SILVA, B. M. F., SANTOS, S. R., OLIVEIRA, Jauvane C. (2009) “Using a Physically-based Camera to Control Travel in Virtual Environments”. In: XI SYMPOSIUM ON VIRTUAL
AND AUGMENTED REALITY, SVR 2009, 2009, Porto Alegre. Anais do XI SYMPOSIUM ON
VIRTUAL AND AUGMENTED REALITY. Porto Alegre: SBC, v.1. p.146 – 156.
C108) TRENHAGO, P. ; de Oliveira, J. C. (2010) Ambiente de Realidade Virtual Imersivo para
Visualização de Dados Biológicos. In: XII Symposium on Virtual and Augmented Reality,
2010, Natal, RN. Proceedings of the SVR2010. Porto Alegre, RS : Sociedade Brasileira de
Computação, v. 1. p. 222-229.
C109) SANTOS, S. R. ; SILVA, B. M. F. ; OLIVEIRA, J. C. (2009) Camera Control Based on
Rigid Body Dynamics for Virtual Environments. In: 2009 IEEE International Conference on
Virtual Environments, Human-Computer Interfaces and Measurement Systems (VECIMS),
2009, Hong Kong, China. 2009 IEEE International Conference on Virtual Environments,
Human-Computer Interfaces and Measurement Systems Conference Proceedings. Los
Alamitos, CA, EUA : IEEE.
C110) SANTOS, S. R. ; SILVA, B. M. F. ; OLIVEIRA, J. C. (2009) Using a Physically-based
Camera to Control Travel in Virtual Environments. In: Symposium on Virtual and Augmented
Reality, 2009, Porto Alegre-RS. Anais do XI Symposium on Virtual and Augmented Reality.
Porto Alegre-RS : Sociedade Brasileira de Computação.
C111) CORDEIRO JÚNIOR, A. A. ; FRAGOSO, M. D. ; GEORGANAS, N. D. ; OLIVEIRA, J.
C. The Markovian Jump Contour Tracker. In: 17th IFAC World Congress, 2008, Seoul, Korea.
17th IFAC World Congress Proceedings, 2008.
C112) MALFATTI, S. M. ; SANTOS, S. R. ; FRAGA, L. M. ; JUSTEL, C. M. ; OLIVEIRA, J. C.
(2008) EnCIMA: A Graphics Engine for the Development of Multimedia and Virtual Reality
Applications. In: X Symposium on Virtual and Augmented Reality, 2008, João Pessoa, PB. X
SVR Conference Proceedings.
C113) SANTOS, S. R. ; OLIVEIRA, J. C. ; FRAGA, L. M. ; TRENHAGO, P. ; MALFATTI, S. M.
(2008) Using a Rendering Engine to Support the Development of Immersive Virtual Reality
Applications. In: IEEE INTERNATIONAL CONFERENCE ON VIRTUAL ENVIRONMENTS,
HUMAN-COMPUTER INTERFACES, AND MEASUREMENT SYSTEMS, 2008, Istanbul,
Turkey. IEEE VECIMS Proceedings.
C114) MALFATTI, S. M. ; FRAGA, L. M. ; OLIVEIRA, J. C. ; SANTOS, S. R. ; ROSA, P. F. F.
(2008) Um Atlas 3D Háptico para o Estudo de Anatomia. In: Workshop de Informática Médica,
2008, Belém, PA. Anais do Workshop de Informática Médica. Porto Alegre, RS : Sociedade
Brasileira de computação.
C115) TRENHAGO, P. ; SANTOS, S. R. ; OLIVEIRA, J. C. (2008) Infra-estrutura de Baixo
Custo para Visualização 3D Estereoscópica Destinada a Aplicações Biológicas e Biomédicas.
In: X Symposium on Virtual and Augmented Reality, 2008, João Pessoa, PB. X SVR
Conference Proceedings. Porto Alegre, RS : SBC.
C116) Albuquerque, L. L. ; MALFATTI, S. M. ; de Oliveira, J. C. ; SALLES, R. M. (2010) Uma
Camada de Comunicação sem Servidor para Ambientes Virtuais Colaborativos. In: XII
Symposium on Virtual and Augmented Reality, 2010, Natal, RN. Proceedings of the
SVR2010. Porto Alegre, RS : Sociedade Brasileira de Computação, v. 1. p. 1-4.
C117) MALINOSKI, I. ; VICOSO, R. P. ; CORREA, B. S. P. M. ; GOMES, A. T. A. ; ZIVIANI, A.
. Suporte Remoto ao Atendimento Médico Emergencial via Dispositivos Móveis. In: Workshop
de Informática Médica (WIM), 2009, Bento Gonçalves, RS - Brasil. Anais do IX Workshop de
Informática Médica, 2009
C118) CAVALINI, L. T. ; MIRANDA-FREIRE, S. ; COOK, T. W. Forward Chaining Inference vs.
Binary Decision Support in an Electronic Health Record Application Based on Archetyped
Data. In: MEDINFO – International Congress of Medical Informatics, 2010 (accepted as a
poster).
C119) VALLE, R. ; SAADE, Débora Christina Muchaluat . Desempenho do Plugin OLSR-BMF
para Comunicação Multicast em Redes Mesh. In: Workshop de Gerência de Redes e
Serviços, 2009, Recife. XIV WGRS, 2009. p. 126-139.
C120) RIBEIRO, C. H. P. ; Saade, D.C. Muchaluat . M-TFRC: Adaptação de Mecanismo de
Congestionamento do Protocolo de Transporte DCCP para Uso em Redes Mesh sem Fio. In:
8th International Information and Telecommunications Symposium, 2009, Florianópolis. I2TS
2009, 2009.
C121) Wanderley, B. L. ; JUSTEN, A. F. A. ; Saade, D.C. Muchaluat . TC MESH: Uma
Ferramenta de Gerência de QoS para Redes em Malha sem Fio. In: 8th International
Information and Telecommunications Symposium, 2009, Florianópolis. I2TS 2009, 2009.
C122) GERK, L. F. ; Saade, D.C. Muchaluat . Solução de QoS para Redes em Malha sem Fio
baseada no Padrão IEEE 802.11e. In: 8th International Information and Telecommunications
Symposium, 2009, Florianópolis. I2TS 2009, 2009.
C123) I.A.Chaves, R.B.Braga, R.M.C.ANDRADE, J.N. de Souza, B.Schulze, Um Mecanismo
Eficiente de Confiança para a Detecção e Punição de Usuários Maliciosos em Grades Peer-to-peer. Anais do VIII Workshop em Clouds, Grids e Aplicações (WCGA2010), 2010,
Gramado – RS, SBC, 2010. p.143 – 156
C124) T.C. de Mello, B.Schulze, R.C. Gomes Pinto, A.R.Mury, Uma análise de recursos
virtualizados em ambiente de HPC, Anais do VIII Workshop em Clouds, Grids e Aplicações
(WCGA2010), 2010, Gramado – RS, SBC, 2010. p.17 – 30
C125) Oliveira, C. R. S. ; Souza, Wanderley Lopes de ; Guardia, H. C. . Uma Arquitetura de
Segurança baseada em Serviços para Grid Services. Anais do VII Workshop de Computação
em Grade e Aplicações - WCGA, 2009, Recife – PE, p. 25-36.
C126) Ferro, M., Mury, A. R., Schulze, B.R.; 'A Proposal of Prediction and Diagnosis in Grid
Computing Self-Healing Problem'; Proceedings of the 3rd Intl. Latin American Grid
Workshop (LAGrid09); Sao Paulo - SP; 28/10/2009
C127) Costa, R. G., Barbosa, A., Bortoln, S., Schulze, B.R.; 'A Grid-based Infrastructure for
Interoperability of Distributed and Heterogeneous PACS'; Proceedings of the 3rd Intl. Latin
American Grid Workshop (LAGrid09); Sao Paulo - SP; Oct. 2009
C128) Ferro, M., Mury, A. R., Schulze, B.R.; 'Applying Inductive Logic Programming to Self-Healing Problem in Grid Computing: Is it a feasible task?'; Proceedings of the Third
International Conference on Advanced Engineering Computing and Applications in Sciences
- ADVCOMP 2009; Oct. 2009
C129) Bandini, M B, Mury, A. R., B. Schulze et al., A Grid–QoS Decision Support System
using Service Level Agreements In: Anais do XXIX Congresso da Sociedade Brasileira de
Computação (CSBC). Sociedade Brasileira de Computação (SBC), 2009. p.249 – 263
C130) Rios, R A, Jacinto, D S, B. Schulze et al., Análise de Heurísticas para Escalonamento
Online de Aplicações em Grade Computacional In: Anais VII Workshop de Computação em
Grade e Aplicações. Sociedade Brasileira de Computação (SBC), 2009. p.13 – 24
C131) Braga, R B, Chaves, I A, ANDRADE, Rossana Maria de Castro, B. Schulze et al.,
Modelos Probabilísticos de Confiança para Grades Computacionais Ad Hoc In: Anais VII
Workshop de Computação em Grade e Aplicações. Sociedade Brasileira de Computação
(SBC), 2009. p.37 – 50
C132) Sardina, I. M., Boeres, C., Drummond, L. M. A., An Efficient Weighted Bi-Objective
Scheduling Algorithm for Heterogeneous Systems In: The 7th Intl. Workshop Heteropar 2009,
Delft. LNCS. New York: Springer-Verlag, 2009. p.1 – 10
C133) Sardina, I. M., Boeres, C., Drummond, L. M. A. Escalonamento Bi-objetivo de
Aplicações Paralelas em Recursos Heterogêneos, Anais do XXVII Simp. Brasileiro de Redes
de Computadores e Sistemas Distribuídos, 2009, Recife-PE, SBC. p.467 – 480
C134) F. G. Oliveira, e V.E.F. Rebello. Algoritmos Branch-and-Prune Autônomos. Nos Anais
do 28º Simpósio Brasileiro de Redes de Computadores e de Sistemas Distribuídos.
Gramado, Brasil, maio 2010.
C135) A.C. Sena, C. Boeres e V.E.F. Rebello. Um Modelo Alternativo para Execução Eficiente
de Aplicações Paralelas MPI nas Grades Computacionais. No Concurso de Teses e
Dissertações em Arquitetura de Computadores e Computação de Alto Desempenho - WSCAD-SCC 2009, São Paulo, outubro 2009.
C136) A.P. Nascimento, A.C. Sena, C. Boeres e V.E.F. Rebello. On the Feasibility of
Dynamically Scheduling DAG Applications on Shared Heterogeneous Systems. Em H. Sips,
D. Epema, and H.-X. Lin, editors, The Proceedings of the 15th International Euro-Par
Conference on Parallel Computing (EuroPar 2009), LNCS 5704, pp. 191--202, Delft, Holland,
August 2009.
5.4) D.Sc. theses, M.Sc. dissertations and undergraduate monographs
T1) Rafael Alves Bonfim de Queiroz. Métodos Numéricos para Interação Fluido-Estrutura e
Análise de Sensibilidade à Mudança de Forma em Hemodinâmica. Start: 2009. Thesis
(Doctoral degree in Computational Modeling) - Laboratório Nacional de Computação
Científica. Advisor: Raúl A. Feijóo. Co-advisor: Pablo J. Blanco.
T2) Daniel Reis Golbert. Métodos de Lattice-Boltzmann para a Modelagem Computacional do
Sistema Cardiovascular Humano. Start: 2009. Thesis (Doctoral degree in Computational
Modeling) - Laboratório Nacional de Computação Científica. Advisor: Raúl A. Feijóo. Co-advisor: Pablo J. Blanco.
T3) Mario Sansuke Maranhão Watanabe. Modelagem dos Sistemas Arterial-Venoso através
do Acoplamento de Modelos 0D-1D-3D. Start: 2009. Thesis (Doctoral degree in
Computational Modeling) - Laboratório Nacional de Computação Científica. Advisor: Raúl A.
Feijóo. Co-advisor: Pablo J. Blanco
T4) Paulo Roberto Trenhago. Modelagem dos Mecanismos de Autoregulação e Controle no
Sistema Cardiovascular Humano. Start: 2010. Thesis (Doctoral degree in Computational
Modeling) - Laboratório Nacional de Computação Científica. Advisor: Pablo J. Blanco. Co-advisor: Jauvane C. Oliveira.
T5) Karine Damásio Guimarães. Acoplamento iterativo forte de modelos dimensionalmente
heterogêneos na modelagem do sistema cardiovascular. Start: 2009. Dissertation (Master
degree in Computational Modeling) - Laboratório Nacional de Computação Científica.
Advisor: Pablo J. Blanco.
T6) Jorge Martín Pérez Zerpa. Caracterização de Propriedades Mecânicas em Artérias
usando um Algoritmo de Ponto Interior. Start: 2010. Dissertation (Master Degree in
Mechanical Engineering). Advisor: José Herskovitz Norman.
T7) Maurício Higa. Quantificação e Análise de Artéria Carótida em Imagens de Ultra-som.
Finished: 27/11/2009. Dissertation (Master degree in Electrical Engineering). 79p. Polytechnic
School – University of Sao Paulo. Advisor: Marco Antonio Gutierrez.
T8) Fernando José Ribeiro Sales. Análise e quantificação tridimensional em imagens de
ultrasom intravascular. 101p. Finished 2009. Ph.D. Thesis. University of Sao Paulo Medical
School. Advisor: Sérgio Shiguemi Furuie.
T9) Anders Holch Heebøll-Holm, Morten Schøler Kristensen. Visualisation and analysis of left
ventricular motion. Undergraduate monograph. Start: 2010. Agreement between Aalborg
University (Denmark), Dept. of Health Science and Technology – Informatics Division, and the Heart Institute.
Advisor: Marco Antonio Gutierrez. Co-advisor: Marina de Sá Rebelo.
T10) Fábio Antero Pires. Mineração de Dados em Bases de Saúde Pública. Start: 2007.
Thesis (PhD - Cardiology) - University of Sao Paulo Medical School. Advisor: Marco A.
Gutierrez.
T11) Jurema da Silva Herbas Palomo. Avaliação da eficiência e eficácia do registro eletrônico
da sistematização da assistência de enfermagem em unidades de terapia cirúrgica em
cardiologia. Start: 2006. Thesis (PhD - Cardiology) - University of Sao Paulo Medical School.
Advisor: Marco A. Gutierrez.
T12) Lilian Contin. Delimitação da área de penumbra em acidente vascular cerebral
isquêmico utilizando imagens de tomografia computadorizada de perfusão. Start: 2009.
Thesis (Ph.D. - Science) – University of Sao Paulo Medical School. Advisor: Griselda J.
Garrido.
T13) Pedro Lopes de Souza. Utilização da plataforma MEVISLAB para processamento e
quantificação de imagens cardíacas. Start: 2009. Undergraduate monograph project.
Biomedical Informatics University of Sao Paulo - Ribeirão Preto. Advisor: Marco A. Gutierrez.
T14) Alexandre de Lacassa. Geração de malhas volumétricas utilizando técnicas de
paralelismo. Start: 2007. Thesis (Doctoral degree in Computer Science and Computational
Mathematics) - Instituto de Ciências Matemáticas e de Computação, University of Sao Paulo.
Advisor: Antonio Castelo Filho.
T15) Marcos Aurélio Batista. Navier-Stokes em Imagens. Start: 2007. Thesis (Doctoral degree
in Computer Science and Computational Mathematics) - Instituto de Ciências Matemáticas e
de Computação, University of Sao Paulo. Advisor: Luis Gustavo Nonato.
T16) Lais Correa. Um esquema de convecção para leis de conservação com aplicação em
escoamentos incompressíveis 3D com superfícies livres móveis. Start: 2009. Dissertation
(Master degree in Computer Science and Computational Mathematics) Instituto de Ciências
Matemáticas e de Computação, University of Sao Paulo. Advisor: Valdemir Garcia Ferreira.
T17) Patricia Sartori. Desenvolvimento de um esquema upwind TVD com aplicações em
problemas de dinâmica dos fluidos. Start: 2009. Dissertation (Master degree in Computer
Science and Computational Mathematics) Instituto de Ciências Matemáticas e de
Computação, University of Sao Paulo. Advisor: Valdemir Garcia Ferreira.
T18) Giseli Aparecida Braz de Lima. Simulação Computacional de Escoamentos Turbulentos
de Fluidos Não Newtonianos com Superfícies Livres. Start: 2010. Thesis (Doctoral degree in
Computer Science and Computational Mathematics) Instituto de Ciências Matemáticas e de
Computação, University of Sao Paulo. Advisor: Valdemir Garcia Ferreira.
T19) Jorge Poco Medina. Visualização Tensorial Rápida baseada em Projeções. Start: 2009.
Dissertation (Master degree in Computer Science and Computational Mathematics) Instituto
de Ciências Matemáticas e de Computação, University of Sao Paulo. Advisor: Rosane
Minghim.
T20) Marcel Yugo Nakasaki. Visualização Multidimensional aplicada à área médica. Start:
2006. Dissertation (Master degree in Computer Science and Computational Mathematics)
Instituto de Ciências Matemáticas e de Computação, University of Sao Paulo. Advisor:
Rosane Minghim.
T21) José Gustavo de Souza Paiva. Classificação Visual de Coleções de Imagens. Start:
2009. Thesis (Doctoral degree in Computer Science and Computational Mathematics)
Instituto de Ciências Matemáticas e de Computação, University of Sao Paulo. Advisor:
Rosane Minghim.
T22) Kátia Felizardo. Uma abordagem de Mineração Visual para o processo de Revisão
Sistemática. Start: 2009. Thesis (Doctoral degree in Computer Science and Computational
Mathematics) Instituto de Ciências Matemáticas e de Computação, University of Sao Paulo.
Advisor: Rosane Minghim.
T23) Thiago Silva Reis Santos. Visualização exploratória de volumes de dados multimodais
apoiada por técnicas de projeção multidimensional. Start: 2010. Dissertation (Master degree
in Computer Science and Computational Mathematics) Instituto de Ciências Matemáticas e
de Computação, University of Sao Paulo. Advisor: Maria Cristina Ferreira de Oliveira.
T24) Glenda Michele Botelho. Mineração visual de dados por meio de projeção e seleção de
características. Start: 2009. Dissertation (Master degree in Computer Science and
Computational Mathematics) Instituto de Ciências Matemáticas e de Computação, University
of Sao Paulo. Advisor: João Batista Neto.
T25) Bruno Brandoli Machado. Uma abordagem de reconhecimento de padrões aplicada à
projeção multidimensional de dados. Start: 2008. Dissertation (Master degree in Computer
Science and Computational Mathematics) Instituto de Ciências Matemáticas e de
Computação, University of Sao Paulo. Advisor: João Batista Neto.
T26) Sergio Francisco da Silva. Seleção de Características de Imagens Médicas por meio de
Algoritmos Genéticos. Start: 2008. Thesis (Doctoral degree in Computer Science and
Computational Mathematics) Instituto de Ciências Matemáticas e de Computação, University
of Sao Paulo. Advisor: João Batista Neto.
T27) Felipe Montefuscolo. Métodos numéricos para escoamentos com linhas de contato
dinâmicas. Start: 2010. Dissertation (Master degree in Computer Science and Computational
Mathematics) Instituto de Ciências Matemáticas e de Computação, University of Sao Paulo.
Advisor: Fabrício Simeoni de Sousa.
T28) Alysson Alexander Naves Silva. Controle e adaptação de malhas tetraedrais para
escoamentos de fluidos. Start: 2010. Thesis (Doctoral degree in Computer Science and
Computational Mathematics) Instituto de Ciências Matemáticas e de Computação, University
of Sao Paulo. Advisor: Fabrício Simeoni de Sousa.
T29) Josuel Kruppa Rogenski. Desenvolvimento e otimização de um código paralelizado para
simulação de escoamentos incompressíveis. Start: 2009. Dissertation (Master degree in
Computer Science and Computational Mathematics) Instituto de Ciências Matemáticas e de
Computação, University of Sao Paulo. Advisor: Leandro Franco de Souza.
T30) Larissa Alves Petri. Simulação Numérica Direta de Escoamento Transicional sobre uma
Superfície contendo Rugosidade. Start: 2010. Thesis (Doctoral degree in Computer Science
and Computational Mathematics) Instituto de Ciências Matemáticas e de Computação,
University of Sao Paulo. Advisor: Leandro Franco de Souza.
T31) Jorge Leiva. Metodos de elementos finitos para hemodinamica computacional. Start:
2006. Thesis (Doctoral degree in Engineering Sciences) Instituto de Física Dr. J.A. Balseiro,
Bariloche, Argentina. Advisor: Gustavo Carlos Buscaglia.
T32) Italo Valença Mariotti Tasso. Modelagem numérica de interfaces fluidicas complexas em
microescala. Start: 2010. Thesis (Doctoral degree in Computer Science and Computational
Mathematics) Instituto de Ciências Matemáticas e de Computação, University of Sao Paulo.
Advisor: Gustavo Carlos Buscaglia.
T33) Selke, AE. Modelo Constitutivo Variacional de Viscoplasticidade em Regime de Grandes
Deformações para um Problema Adiabático Termomecanicamente Acoplado. Finished: 2009.
Dissertation (Master degree in Mechanical Engineering) Universidade Federal de Santa
Catarina. Advisor: Eduardo Alberto Fancello. Co-advisor: Laurent Stainier.
T34) Lazzaroni, A. Técnicas de Reparo de Cartilagem Articular. Finished: 2009. Graduate
Monograph (Graduate degree in Medicine) Universidade Federal de Santa Catarina. Advisor:
Ari Digiácomo Ocampo Moré. Co-advisor: Carlos Rodrigo de Mello Roesler.
T35) Reis, DB. Análise Normativa de Ensaios Mecânicos de Componentes de Osteossíntese
e Caracterização de um Novo Procedimento de Ensaio. Start: 2009. Dissertation (Master
degree in Mechanical Engineering) Universidade Federal de Santa Catarina. Advisor: Edison
da Rosa. Co-advisors: Eduardo A. Fancello, Carlos R. de Mello Roesler. Estimated date of
completion: May 2010
T36) Bento, D. Desenvolvimento de um simulador para ensaios de desgaste em prótese de
quadril. Start: 2007. Thesis (Doctoral degree in Mechanical Engineering) Universidade
Federal de Santa Catarina. Advisor: Edison da Rosa. Estimated date of completion: June
2010
T37) Martins, J. Avaliação Biomecânica de Diferentes Técnicas para Tratamento Cirúrgico da
Coluna Vertebral Humana. Start: 2008. Dissertation (Master degree in Mechanical
Engineering) Universidade Federal de Santa Catarina. Advisor: Edison da Rosa. Co-advisor: Carlos Rodrigo de Mello Roesler. Estimated date of completion: June 2011
T38) Braga Junior, RV. Comparação Experimental da Resistência Mecânica de Fixações
Ligamentares Utilizando Parafuso de Interferência de Diferentes Diâmetros. Start: 2010.
Monograph (Graduate degree in Medicine) Universidade Federal de Santa Catarina. Advisor:
Ari Digiácomo Ocampo Moré. Co-advisor: Carlos Rodrigo de Mello Roesler.
T39) Batschauer, A. Análise comparativa de técnicas de correlação de imagens digitais para
medição do campo de deslocamento de um corpo de prova em ensaio mecânico. Start: 2009.
Scientific Initiation Monograph (Initiation in Sciences in Mechanical Engineering)
Universidade Federal de Santa Catarina. Advisor: Eduardo Fancello. Co-advisor: Jakson
Manfredini Vassoler. Estimated date of completion: June 2010
T40) Pavan, RB. Estudo Biomecânico e Análise de uma Prótese de Joelho. Start: 2010.
Graduate Monograph (Graduate degree in Mechanical Engineering) Universidade Federal de
Santa Catarina. Advisor: Eduardo Alberto Fancello. Co-advisor: Carlos Rodrigo de Mello
Roesler.
T41) Erick Martins Ratamero. Modeling Clinical Examination Reports of Lumbar Spine
Magnetic Resonance Imaging Using openEHR Archetypes. Finished: 2010. Federal
Fluminense University. Supervisor: Débora Cristina Muchaluat Saade; Co-supervisor: Timothy
Wayne Cook.
T42) Jean Carlo de Souza Santos. Proposta de um método para geração automática de
regras fuzzy dos rough sets. 2009 – Faculdade de Computação – Universidade Federal de
Uberlândia. Supervisor: Denise Guliato.
T43) Marcos Fuzzaro. Estudo de métodos para redução do espaço característica. Finished:
2009. Thesis - Universidade Federal de Uberlândia Advisor: Denise Guliato.
T44) Lidia Moraes. Construção de uma base de dados com objetos 3D. Finished: 2010.
Thesis - Universidade Federal de Uberlândia. Advisor: Denise Guliato.
T45) Mario Augusto de Souza Lizier. Geração e refinamento de malhas segmentadas a partir
de imagens com textura. Finished: 2009. Thesis (Doctoral degree in Computer Science and
Computational Mathematics) - Instituto de Ciências Matemáticas e de Computação,
University of Sao Paulo. Advisor: Luis Gustavo Nonato.
T46) Cláudio Haruo Yamamoto. Visualização como Suporte à Extração e Exploração de
Regras de Associação. Finished: 2009. Thesis - University of São Paulo. Advisor: Maria
Cristina Ferreira de Oliveira.
T47) Roberto Dantas de Pinho. Espaço incremental para a mineração visual de conjuntos
dinâmicos de documentos. Finished: 2009. Thesis - Universidade de São Paulo. Advisor:
Maria Cristina Ferreira de Oliveira. Co-advisor: Rosane Minghim.
T48) Marcio Oliveira Almeida. Avaliação de desempenho de algoritmos paralelos para uma
plataforma de mineração visual. Finished: 2009. Thesis – University of São Paulo. Advisor:
Maria Cristina Ferreira de Oliveira.
T49) Rafael Mitsuo Maki. Visualização como apoio à análise de dados de biosensores.
Finished: 2009. Graduate Monograph – University of São Paulo. Advisor: Maria Cristina
Ferreira de Oliveira.
T50) Pollyana Marques de Moura. Avaliação da Expansão Rápida da Maxila por Meio de
Tomografia Multislice e Cone-Beam: Finished: 2010. Dissertation (Master Degree in Medicine
(Radiology)) - Universidade Federal do Rio de Janeiro. Advisor: Bianca Gutifilen.
T51) Diego Augusto Thomaz Quadrado Leite. PyImageVis: Plataforma Python para
Processamento e Visualização de Imagens Médicas. Finished: 2010. Graduate Monograph
(Technical Degree in Information and Communication) – Instituto Superior de Tecnologia em
Ciência da Computação de Petrópolis. Advisor: Gilson Antonio Giraldi.
T52) Bruno Rafael de Araújo Sales. Collaboration in VR Systems for Medical Training: A
Module for The CyberMed Framework. 2010. Dissertation (Master in Informatics) - Federal
University of Paraiba. Scholarship: CAPES. Supervisors: Liliane S. Machado and Ronei M.
Moraes. In Portuguese.
T53) Janio Araruna Carvalho. New Technologies for Training of Gynecological Exam. 2010.
Scientific Initiation in Medicine. Federal University of Paraiba. Supervisor: Liliane dos Santos
Machado. In Portuguese.
T54) Rafael Henrique Assis de Castro. Haptic Calibrator. 2010. Scientific Initiation in
Computer Science. Federal University of Paraiba. Supervisor: Liliane dos Santos Machado. In
Portuguese.
T55) Antonio Deusany de Carvalho Jr. Integration of a Module for Magnetic Tracking in
CyberMed. 2010. Computer Science Graduation Monograph. Federal University of Paraiba.
Supervisor: Liliane dos Santos Machado. In Portuguese.
T56) Antonio Deusany de Carvalho Jr. Recognition and Import of 3D models in CyberMed.
2009. Scientific Initiation in Computer Science. Federal University of Paraiba. Supervisor:
Liliane dos Santos Machado. In Portuguese.
T57) Paulo Rodrigues Felisbino (Supervisor: Fátima L. S. Nunes). Implementação de
realismo em ferramentas de realidade virtual para treinamento médico, Sistemas de
Informação Course - Escola de Artes Ciências e Humanidades; Sponsored by Universidade
de São Paulo.
T58) Lucas Prieto Nogueira (Supervisor: Fátima L. S. Nunes). Avaliação automatizada de
ferramentas de realidade virtual para treinamento médico, Sistemas de Informação Course -
Escola de Artes Ciências e Humanidades (Universidade de São Paulo); Sponsored by
Universidade de São Paulo.
T59) Ana Claudia Carreira Frata, (Supervisor: Ildeberto Rodello). Título: Avaliação heurística
da aplicação Wizard do framework ViMeT, Informática Biomédica Course - Faculdade de
Filosofia, Ciências e Letras de Ribeirão Preto (Universidade de São Paulo); Sponsored by
Universidade de São Paulo.
T60) Tales Nereu Bogoni. Sistema para Monitoramento de Técnicas de Direção Econômica
em Caminhões com Uso de Ambientes Virtuais Desktop. 2009. Dissertação (Ciência da
Computação) - Pontifícia Universidade Católica do Rio Grande do Sul.
T61) Fabricio Pretto. Uso de Realidade Virtual e Aumentada no Treinamento de Emergências
Médicas. 2006. Dissertação (Ciência da Computação) - Pontifícia Universidade Católica do
Rio Grande do Sul.
T62) André Benvenutti Trombetta. Um Dispositivo De Interação Em Ambientes Virtuais De
Visualização. 2008. Dissertação (Ciência da Computação) - Pontifícia Universidade Católica
do Rio Grande do Sul.
T63) Albino Adriano Alves Cordeiro Junior. Modelos e Métodos para Interação
Homem-Computador Usando Gestos Manuais. 2009. Tese (Doutorado em Modelagem
Computacional) - Laboratório Nacional de Computação Científica, Fundação Carlos Chagas
Filho de Amparo à Pesq. do Estado do Rio de Janeiro. Advisor: Jauvane Cavalcante de
Oliveira.
T64) Silvano Maneck Malfatti. ENCIMA - Um Motor para o Desenvolvimento de Aplicações
de Realidade Virtual. 2009. Dissertação (Mestrado em Sistemas e Computação) - Instituto
Militar de Engenharia. Advisor: Jauvane Cavalcante de Oliveira.
T65) Luciane Machado Fraga. Proposta de um Método para Otimizar Detecção de Colisões
Utilizando Áreas de Interesse. 2009. Dissertação (Mestrado em Sistemas e Computação) -
Instituto Militar de Engenharia. Advisor: Jauvane Cavalcante de Oliveira.
T66) Paulo Roberto Trenhago. Ambiente de Realidade Virtual Automático para Visualização
de Dados Biológicos. 2009. Dissertação (Mestrado em Modelagem Computacional) -
Laboratório Nacional de Computação Científica, Coordenação de Aperfeiçoamento de
Pessoal de Nível Superior. Advisor: Jauvane Cavalcante de Oliveira.
T67) Mattheus da Hora França. EnCIMA: um motor gráfico para o desenvolvimento de
ambientes multimídia colaborativos. 2009. Iniciação Científica. (Graduando em Ciência da
Computação) - Universidade Estadual de Santa Cruz, Conselho Nacional de
Desenvolvimento Científico e Tecnológico. Advisor: Jauvane Cavalcante de Oliveira.
T68) Marlan Kulberg. Desenvolvimento de uma Interface para o Phantom Omni. 2008.
Iniciação Científica - Laboratório Nacional de Computação Científica, Conselho Nacional de
Desenvolvimento Científico e Tecnológico. Advisor: Jauvane Cavalcante de Oliveira.
T69) Bruno Oliveira de Alcântara. Implementação de um módulo para o reconhecimento e
manipulação dos dispositivos de interação com o usuário. 2008. Iniciação Científica -
Laboratório Nacional de Computação Científica, Conselho Nacional de Desenvolvimento
Científico e Tecnológico. Advisor: Jauvane Cavalcante de Oliveira.
T70) Victor de Almeida Thomaz. Implementação de um Módulo para o Reconhecimento e
Manipulação do Dispositivo de Interação com o Usuário Cyber Glove II. 2008. Iniciação
Científica - Laboratório Nacional de Computação Científica, Conselho Nacional de
Desenvolvimento Científico e Tecnológico.
T71) Éllen dos Santos Correa. Ambientes Virtuais Colaborativos. 2008. Iniciação Científica.
(Graduando em Sistemas de Informação) - Fundação de Apoio à Escola Técnica do Estado
do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico. Advisor:
Jauvane Cavalcante de Oliveira.
T72) Luiz Felipe do Amaral Marchese da Silva. Análise Arquitetural de Sistemas de
Teleatendimento Médico Emergencial. Finished: 2010. Undergraduate monograph
(Information Systems) - Universidade Estácio de Sá. Advisor: Antônio Tadeu Azevedo
Gomes.
T73) Emanuele Nunes de Lima Figueiredo Jorge. Aplicação em Telemedicina (provisional
title). Start: 2008. M.Sc. dissertation (Systems and Computing) - Instituto Militar de
Engenharia. Advisor: Artur Ziviani.
T74) Verano Costa Dutra. Análise da venda de medicamentos pela internet: o caso do Pramil
e Cytotec. Finished: 2010. M.Sc. dissertation (Public Health) - Federal Fluminense University.
Advisor: Luciana Tricai Cavalini.
T75) Natália Silva de Oliveira. Desempenho dos prontuários eletrônicos e das prescrições de
sistemas computadorizados de apoio à decisão na pediatria: uma revisão sistemática. Start:
2009. M.Sc. dissertation (Public Health) - Federal Fluminense University. Advisor: Luciana
Tricai Cavalini.
T76) Otavio Coutinho Coelho da Silva. Elaboration of archetypes for a decision support
system in epidemiological surveillance. Finished: 2010. Undergraduate monograph (Public
Health) - Federal Fluminense University. Advisor: Luciana Tricai Cavalini.
T77) Erick Martins Ratamero. Modeling Clinical Examination Reports of Lumbar Spine
Magnetic Resonance Imaging Using openEHR Archetypes. Finished: 2010. Federal
Fluminense University. Advisor: Débora Christina Muchaluat Saade; Co-advisor: Timothy
Wayne Cook.
T78) Marcus Vinicius de Almeida Ferreira. Classificação de Fluxos de Voz Baseada na
Importância da Chamada. Finished: 2009. M.Sc. dissertation (Telecommunications
Engineering) - Universidade Federal Fluminense. Advisor: Débora Christina Muchaluat
Saade.
T79) Cesar Henrique Pereira Ribeiro. Adaptação do Mecanismo de Controle de
Congestionamento TFRC do Protocolo de Transporte DCCP para Redes em Malha sem Fio.
Finished: 2009. M.Sc. dissertation (Telecommunications Engineering) - Universidade Federal
Fluminense. Advisor: Débora Christina Muchaluat Saade.
T80) Bruno Lima Wanderley. TC MESH: Uma Ferramenta de Gerência de Qualidade de
Serviço para Redes em Malha sem Fio. Finished: 2009. Master Thesis (Telecommunications
Engineering) - Universidade Federal Fluminense. Advisor: Débora Christina Muchaluat
Saade.
T81) Bruno de Avilla da Fonseca e Silva. Provisão de Qualidade de Serviço para o Sistema
AToMS em Redes em Malha sem Fio. Finished: 2010. Undergraduate monograph.
(Telecommunications Engineering) - Universidade Federal Fluminense. Advisor: Débora
Christina Muchaluat Saade.
T82) Lívia Gerk. Provisão de QoS em Redes em Malha Sem Fio Baseada no Padrão IEEE
802.11e. Start: 2009. M.Sc. dissertation (Telecommunications Engineering) - Universidade
Federal Fluminense, Advisor: Débora Christina Muchaluat Saade.
T83) Rafael Valle. Gerência de Redes em Malha sem Fio. Master Thesis
(Telecommunications Engineering) - Universidade Federal Fluminense. Advisor: Débora
Christina Muchaluat Saade.
T84) Jacques Alves da Silva. Tolerância a Falhas para Aplicações Autônomas em Grades
Computacionais. May 2010. D.Sc. Thesis - Universidade Federal Fluminense. Advisor:
Eugene Francis Vinod Rebello.
T85) Matheus Bousquet Bandini. Qualidade de Serviço em Grades Computacionais
utilizando Acordo em Nível de Serviço. 2009, M.Sc. Dissertation - IME-RJ.
T86) Douglas Ericson Marcelino de Oliveira. Otimização de Aplicações de Visualização
Científica usando o QEF. 2010, M.Sc. Dissertation - IME-RJ.
T87) Thais Cabral de Mello. Ambiente para Criação de Clusters Virtuais em Grids
Computacionais. 2010, M.Sc. dissertation - IME-RJ.
T88) Henrique de Medeiros Kloh. Modelo de Escalonamento de Workflows em Grids. 2010,
M.Sc. dissertation - IME-RJ.
T89) Henrique Bueno Rodrigues. GridSA: Uma Sociedade Autônoma. December 2009. M.Sc.
dissertation - Universidade Federal Fluminense, Advisor: Eugene Francis Vinod Rebello.
T90) Fernanda Gonçalves de Oliveira. Aplicações Autônomas para Computação em Larga
Escala. April 2010. M.Sc. Dissertation - Universidade Federal Fluminense. Advisor: Eugene
Francis Vinod Rebello.
T91) Ariel Alves Fonseca. Análise Experimental do Controle de Fluxo de Mensagens na
Execução Paralela de Aplicações em Grades Computacionais. April 2010. M.Sc. Dissertation
- Universidade Federal Fluminense. Advisor: Eugene Francis Vinod Rebello.
5.5) Scientific events organization
E1) Mini-symposium on Multi-Physics Multi-scale Computational Modeling of the
Cardiovascular System, in the 1st International Conference on Mathematical and
Computational Biomedical Engineering (CMBE 2009), Swansea, Wales, June 29 - July 1,
2009. Organizers:
Raúl A. Feijóo and Pablo J. Blanco.
E2) MICCAI-Grid Workshop Medical imaging on GRID, HPC and GPU infrastructures:
achievements and perspectives. 2009. (Conference). Organizer: Marco Antonio Gutierrez.
E3) ENEBI 2009 – II Encontro Nacional de Engenharia Biomecânica (II National Meeting on
Biomechanical Engineering), May 7-9, 2009, Praia dos Ingleses, Florianópolis, Brazil – 110
participants, 50 oral presentations and 60 poster presentations (ABCM meeting).
E4) 17th International Conference on Systems, Signals and Image Processing (IWSSIP
2010), Novo Mundo Hotel, Rio de Janeiro, RJ. Aura Conci – general chair; Débora Christina
Muchaluat Saade – organizing committee member. Date: June 17-19, 2010.
E5) SVR 2010 - XII Symposium on Virtual and Augmented Reality. Natal, RN, May 2010.
E6) SVR 2009 - XI Symposium on Virtual and Augmented Reality, Porto Alegre, RS, May
2009.
E7) Workshop of the Rio de Janeiro Branch of the Brazilian Society of Health Information,
August 5-7, 2009, 17:00-22:00.
E8) Mini-course: “openEHR: an open specification for the development of Electronic Health
Records”. Laboratório Nacional de Computação Científica - LNCC, Petrópolis, January 18-22,
2010, 8 hours.
E9) Mini-course: “openEHR: an open specification for the development of Electronic Health
Records”. Laboratório Nacional de Computação Científica - LNCC, Petrópolis, June 28-30,
2010, 5 hours.
E10) Expert panel in tuberculosis for the design of archetypes related to the syndromic
diagnosis of respiratory symptomatic patients. Organized by the MLHIM Associated
Laboratory, Fundo Global de Combate à Tuberculose, and the “Health Information” Research
Group of the Sérgio Arouca National School of Public Health, Oswaldo Cruz Foundation, held
on June 18, 2010.
E11) R.A.Feijóo, A.Ziviani, P.J. Blanco, G.C. Buscaglia, D. Guliato, G.A. Giraldi, J.C. de
Oliveira, S.R. dos Santos, A.T.A. Gomes, D.C.M. Saade, B. Schulze, M.C.S. Bôeres, 1st
Workshop on Scientific Computing in Health Applications (WSCHA) 2010, LNCC.
E12) J.B. Broberg, B. Schulze, R. Buyya, 2nd International Symposium on Cloud Computing
(Cloud) 2010, Melbourne, Australia.
E13) B. Schulze, M. Brunner, O. Cherkaoui, 1st IFIP/IEEE International Workshop on Cloud
Management (CloudMan) 2010, Osaka, Japan.
E14) B. Schulze, J. Myers, 7th International Workshop on Middleware for Grids, Clouds and
e-Science (MGC) 2009, Urbana-Champaign, USA.
E15) B. Schulze, A.R. Mury, 3rd International Workshop on Latin American Grid (LAGRID)
2009.
E16) B. Schulze, J.N. de Souza, VII Workshop de Computação em Grade e Aplicações
(WCGA), 2009.
5.6) Participation in conferences
For a comprehensive list of the conferences in which members of the INCT-MACC have
participated, please refer to the section “Publications in conference proceedings”. All the
events listed there were attended by INCT-MACC members.
5.7) Software development
S1) HeMoLab: A virtual laboratory for computational modeling of the cardiovascular system
S2) ImageLab: Software for medical image processing and preprocessing in computational
modeling
S3) Imesh: Software for mesh generation based on images
S4) PyImageVis: software, implemented in Python, for image processing and visualization of
3D images.
S5) CyberMed 2.0. Available at http://cybermed.sourceforge.net.
S6) ACAmPE: A Collaborative Environment for the Oil Industry
S7) ACOnTECe: A Collaborative Environment for Surgical Training.
S8) EnCIMA: An Engine for Collaborative and Immersive Multimedia Applications.
S9) AVIDHa: A Distributed Haptic Virtual Atlas
S10) Development of the demographic archetypes (for the identification of persons and
organizations) that are available in the repository of the openEHR Foundation
(http://www.openehr.org/knowledge).
S11) First experiments in semi-automatic generation of graphical user interfaces from
archetypes, using either XSLT or the Zope framework.
S12) Creation of a terminology service to access the openEHR terminology and the ICD
(International Classification of Diseases).
S13) A prototype decision support system using archetypes through PyCLIPS, an extension
module for Python that embeds all the functionality of CLIPS (C Language Integrated
Production System).
S14) Implementation of the AToMS system version 1.3.1 and its deployment at the
HUCFF/UFRJ.
S15) Implementation of the openEHR specifications in Python using the Zope application
server and the Grok framework.
5.8) Awards
A1) Daniel Reis Golbert, Odelar Leite Linhares Award to the M.Sc. dissertation entitled
“Modelos de Lattice-Boltzmann Aplicados à Simulação Computacional do Escoamento de
Fluidos Incompressíveis”, 2009. Associated Laboratory: HeMoLab.
A2) Marco Antonio Gutierrez & Team. First place in the Small and Medium Business category
of the award “The 100+ Innovative Using IT”, promoted by “InformationWeek Brazil”.
A3) Best Poster for Students at II Encontro Nacional de Engenharia Biomecânica
(ENEBI2009): Medeiros, CB, Carvalho, JM, Moraes, VM, Dallacosta, D, Bento, DA, Roesler,
CRM. Sistema de medição de temperatura sem fio para análise da geração de calor em
próteses articulares.
A4) Best paper in the WIE2009 - XV Workshop Sobre Informática na Escola - Paper "VIDA:
Atlas Anatômico 3D Interativo para Treinamento a Distância", Romero Tori, Fátima L. S.
Nunes, Victor H. P. Gomes, Daniel M. Tokunaga, Sociedade Brasileira de Computação.
A5) Best paper in the WIM2009 - IX Workshop de Informática Médica – category Full Paper –
Paper "Avaliação de uma luva de dados e um sistema virtual para aplicações de treinamento
médico", Bezerra, A.; Nunes, F. L. S.; Corrêa, C. G. Sociedade Brasileira de Computação.
A6) Best full paper award, XI Brazilian Symposium on Virtual and Augmented Reality,
SVR2009 – Paper “Using a Physically-based Camera to Control Travel in Virtual
Environments”, Bruno Marques Ferreira da Silva, Selan Rodrigues dos Santos, Sociedade
Brasileira de Computação.
A7) Iuri M. Teixeira and Rodolfo P. Viçoso. Work selected among the Brazilian top 10
scientific initiation projects in computing (CTIC 2009), Brazilian Computing Society (SBC).
A8) Alexandre Sena. Um Modelo Alternativo para Execução Eficiente de Aplicações
Paralelas MPI nas Grades Computacionais. D.Sc. Thesis, Doctoral Degree in Computer
Science. Best thesis in the Computer Architecture and High Performance Computing Contest
(WSCAD-SCC 2009), SBC (Special Committee in Computer Architecture).
A9) Raúl Antonino Feijóo. IACM Fellows Award 2010. Awarded by the International
Association for Computational Mechanics (IACM).