I was able to attend the Radiological Society of North America's (RSNA) 104th Scientific Assembly and Annual Meeting at McCormick Place in Chicago, IL, which ran from November 25 to November 30, 2018. The annual meeting is a very large gathering of industry leaders in medical imaging, radiologists, and other related professionals. This year's tagline was Tomorrow's Radiology Today, and the meeting brought back a strong emphasis on machine learning and 3D printing. As usual, there were many exhibitors with new medical imaging devices ready to discuss and demonstrate. In particular, there were a few first-time exhibitors I was excited to see, including EMTensor and Butterfly Network. There was also a U.S. market debut by United Imaging Healthcare, which had a large exhibitor space. As usual, there were also numerous posters and presentations.

This year brought back the popular deep learning classroom presented by the NVIDIA Deep Learning Institute (DLI), designed for attendees to engage with deep learning tools, write algorithms, and improve their understanding of deep learning technology. In one session, called Introduction to Deep Learning, attendees used convolutional neural networks (CNNs) along with a MedNIST data set consisting of 1,000 images each from six categories: chest X-ray, hand X-ray, head CT, chest CT, abdomen CT, and breast MRI. The task was for attendees to identify the image type. Another session focused on 3D segmentation of brain MRI using deep learning methods, particularly V-Nets.
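To give a feel for the kind of exercise involved, here is a minimal NumPy sketch of the forward pass of a CNN-style classifier over the six MedNIST categories: a convolution, a ReLU, global average pooling, and a softmax over class scores. The filter and classifier weights here are random (untrained), so this is purely illustrative and not the DLI course material.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One forward pass on a toy 16x16 "scan":
# conv -> ReLU -> global average pool -> linear -> softmax over 6 classes.
classes = ["AbdomenCT", "BreastMRI", "CXR", "ChestCT", "Hand", "HeadCT"]
image = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3)) * 0.1      # 4 (untrained) filters
weights = rng.standard_normal((len(classes), 4)) * 0.1

features = np.array([relu(conv2d(image, k)).mean() for k in kernels])
probs = softmax(weights @ features)
print("predicted class:", classes[int(np.argmax(probs))])
```

In the actual course the weights would of course be learned by backpropagation over the labeled MedNIST images; the sketch only shows how the image-to-class-probabilities pipeline is wired together.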

This year also brought back the machine learning showcase, which provided the opportunity to network with nearly 80 companies at the forefront of developments in machine learning and artificial intelligence. A new showcase this year, the 3D Printing & Advanced Visualization Showcase, focused on groundbreaking technology in 3D printing, virtual reality, and augmented reality. Another new feature was a Recruiters Row, which allowed attendees to connect with organizations offering career opportunities. Like last year, there was also a start-up showcase featuring emerging companies bringing innovations to medical imaging.

I was able to attend a few educational courses and scientific sessions. In particular, I attended a session titled Image Processing in Imaging and Radiation Therapy and another titled Deep Learning in Radiology: How Do We Do It? In the former, I was intrigued by a talk from Imbio, which trained a CNN to produce quality ventricle segmentations with only 43 scans in the training dataset, using data augmentation to improve performance on the test dataset, and by another talk from researchers at the University of Chicago on classifying chest radiographs as anteroposterior or posteroanterior. The latter session featured insights into deep learning in radiology at The Ohio State University, Stanford University, and the Mayo Clinic in Rochester.

Below are some of the pictures I took while at the RSNA annual meeting in 2018, in Chicago, IL.

I also attended last year's RSNA annual meeting in 2017; you can find more information and photos at http://www.toddmccollough.com/radiological-society-north-america-rsna-chicago-il-2017-mccormick-place/.

I am pleased that a paper titled “A Time-Domain Measurement System for UWB Microwave Imaging,” on which I am a co-author through work with the Celadon Research Division of Ellumen Inc., has been published in IEEE Transactions on Microwave Theory and Techniques in 2018. This paper discusses a fully automatic time-domain measurement system for microwave imaging that uses a pair of movable antennas to transmit and receive custom UWB pulse designs. The system incorporates elements from the previously discussed Microwave Imaging Device patent, in which a pair of movable antennas are independently controlled to rotate around a region of interest. This paper builds upon work presented in 2017 in IEEE Transactions on Microwave Theory and Techniques in the paper “A Phase Confocal Method for Near-Field Microwave Imaging” and at the IEEE AP-S Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting in the poster presentation titled “Experimental Microwave Near-field Detection with Moveable Antennas.” The prior two works used the system in the frequency domain, with a vector network analyzer generating and receiving signals. The new paper describes the time-domain use of the system, with an arbitrary waveform generator generating signals and a digital phosphor oscilloscope receiving them.

I have included an excerpt from the accepted version of the paper below. DOI: https://doi.org/10.1109/TMTT.2018.2801862 © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

microwave imaging time domain device

Figure 2 in the paper shows the system as set up at Ellumen Inc., with a PVC cylinder placed in the middle tray. A reconstructed image from data collected with this setup, produced using the delay-multiply-and-sum (DMAS) imaging algorithm, is shown in Figure 9. In Figure 10(a) the target was changed to a metallic object and a long square wooden object, both placed in the middle tray; the corresponding DMAS reconstruction is shown in Figure 10(b). The DMAS algorithm was implemented on eight NVIDIA Tesla GPUs, which allowed images to be produced in under one minute. A comparison between the time-domain and frequency-domain systems was performed in the paper but is not included in the excerpt above; this analysis showed that both methods of data collection can yield accurate reconstructed images. The data-collection software was also updated, as presented in this paper, so that both incident and total field data collections complete in 20 minutes. I encourage you to download and read the full “A Time-Domain Measurement System for UWB Microwave Imaging” paper from IEEE for the complete details and analysis.
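As a rough illustration of the DMAS idea (not the GPU implementation from the paper), here is a small NumPy sketch: each channel's time-domain trace is sampled at the round-trip delay corresponding to a candidate pixel, and the pairwise products of those samples are summed, so pixels whose delays line up with a real echo score highest. The function name, the synthetic Gaussian echo, and the two-pixel "grid" are my own illustrative choices.

```python
import numpy as np

def dmas_image(signals, delays, fs):
    """Delay-multiply-and-sum over a set of candidate pixels.

    signals: (n_ch, n_samples) time-domain traces, one per antenna channel.
    delays:  (n_pix, n_ch) round-trip delays in seconds, channel-to-pixel.
    fs:      sampling rate in Hz.
    Returns one intensity value per pixel.
    """
    n_pix, n_ch = delays.shape
    idx = np.clip((delays * fs).astype(int), 0, signals.shape[1] - 1)
    out = np.zeros(n_pix)
    for p in range(n_pix):
        # Sample each channel's trace at its own delay for this pixel...
        s = signals[np.arange(n_ch), idx[p]]
        # ...then multiply pairwise and sum (the "multiply" in DMAS).
        for i in range(n_ch):
            for j in range(i + 1, n_ch):
                out[p] += s[i] * s[j]
    return out

# Synthetic example: 4 channels all see a Gaussian echo centered at 100 ns.
fs = 1e9
t = np.arange(512) / fs
pulse = np.exp(-((t - 100e-9) ** 2) / (2 * (5e-9) ** 2))
signals = np.tile(pulse, (4, 1))
# Pixel 0's delays match the echo; pixel 1's do not.
delays = np.array([[100e-9] * 4, [300e-9] * 4])
img = dmas_image(signals, delays, fs)
```

The pairwise multiplication is what distinguishes DMAS from plain delay-and-sum beamforming, and it is also why the per-pixel cost grows quadratically in the channel count, which helps explain the value of the GPU implementation described in the paper.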

I was able to attend the Radiological Society of North America (RSNA) meeting at McCormick Place in Chicago, IL, which ran from November 26 to December 1, 2017. The annual meeting is a very large gathering of industry leaders in medical imaging, radiologists, and other related professionals. This was the 103rd Scientific Assembly and Annual Meeting with the tagline: Explore, Invent, Transform. This year the meeting was heavily focused on machine learning, virtual reality, and 3D printing. As always, there were many exhibitors with new medical imaging devices ready to discuss and demonstrate. There were also interesting plenary sessions, educational courses, and scientific sessions, along with numerous posters and presentations.

A popular feature this year at RSNA, was a deep learning classroom presented by the NVIDIA Deep Learning Institute (DLI), designed for attendees to engage with machine learning tools, write algorithms, and improve their understanding of emerging machine learning technology. In one of these sessions, attendees trained a deep neural network to recognize handwritten digits. In another session, attendees trained convolutional neural networks (CNNs) to create biomarkers to identify the genomics of a disease without the use of an invasive biopsy. In yet another session, attendees segmented magnetic resonance imaging (MRI) images to measure parts of the heart.

Another feature this year was a separate section for machine learning showcase exhibitors. This section allowed those interested in machine learning to easily network with those in the field, and it featured a machine learning theater with presentations from industry leaders. For example, in one presentation, Google Cloud talked about machine learning in imaging and how to build your own models on the cloud. In another, Siemens Healthineers discussed artificial intelligence solutions for clinical decision making that turn medical images into biomarkers to help increase the effectiveness of care. There was also a 3D printing theater with many posters and actual 3D-printed parts nearby. In addition, there were several virtual reality demos set up for attendees to try themselves.

I was able to attend many interesting courses on machine learning, radiomics, 3D printing, virtual reality, and predictive analytics. For example, one course I attended discussed how to use KNIME to incorporate radiology data sources into predictive modeling, interpret the results, and create visualizations. Another course included an interesting talk about using virtual reality in medical education and how it can greatly decrease the time needed to teach students compared to PowerPoint presentations. In yet another course, instructors walked attendees through using Mimics and 3-matic from Materialise; participants were taught how to segment musculoskeletal, body, neurological, and vascular systems from DICOM files into a Standard Tessellation Language (STL) file for use with a 3D printer.
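As an aside on the STL format itself: an ASCII STL file is just a list of triangular facets, which is why segmentation tools can export a surface mesh to it directly. Below is a minimal Python sketch (my own illustration, unrelated to Mimics or 3-matic) that writes a list of triangles to an ASCII STL file.

```python
def write_ascii_stl(path, triangles, name="segmentation"):
    """Write triangles (each a list of three (x, y, z) vertices) as ASCII STL.

    Facet normals are written as zero vectors here; most slicers and mesh
    viewers recompute them from the vertex winding order.
    """
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x:.6e} {y:.6e} {z:.6e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single triangle as a minimal demonstration of the facet/vertex syntax.
write_ascii_stl("demo.stl", [[(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
```

A real export would feed in the thousands of triangles produced by running a surface-extraction step (e.g. marching cubes) over the segmented voxel mask, which is essentially what the Materialise tools automate.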

I was also able to attend the plenary session by Michio Kaku titled “The Next 20 Years: How science and technology will revolutionize business, the economy, jobs, and our way of life.” In the talk, Dr. Kaku discussed the next wave of wealth generation in our modern economy, which he believes will come from advancements at the molecular level, including in artificial intelligence, nanotechnology, and biotechnology, linked together by the cloud. He believes that information will be everywhere and that the word "computer" will fade from everyday language, much as "electricity" has today, because the technology will be ubiquitous. Dr. Kaku acknowledged that robots will replace some jobs in the future but said robots are weak in three areas: 1) pattern recognition, 2) common sense, and 3) human interactions. Thus he believes that in many cases artificially intelligent systems will aid humans rather than replace them.

Below are some of the pictures I took while at the RSNA annual meeting in 2017, in Chicago, IL.

RSNA McCormick Place

RSNA 2017 Chicago McCormick Place

Welcome RSNA 2017

RSNA South Technical Exhibits

RSNA Learning Center

RSNA Posters

RSNA Cardiac Informatics Machine Learning

RSNA Deep Learning Classroom

RSNA Virtual Reality

RSNA 3D Printing in Medicine

RSNA 3D Printing Posters

RSNA 3D Imaging in Anatomic Pathology

RSNA 3D Printing Technology

RSNA 3D Printing Schedule

RSNA National Cancer Institute Cancer Imaging Archive

RSNA QIRR Meet the Experts

RSNA Rontgen Reimagined

RSNA Welcome 2017

RSNA Booth Sitting

RSNA Canon Toshiba Medical

RSNA Toshiba

RSNA Machine Learning Google Cloud

RSNA Carestream

RSNA ziehm imaging

RSNA General Electric GE

RSNA Samsung

RSNA Konica Minolta

RSNA Bayer Angiography

RSNA Siemens Healthineers

RSNA Elsevier

RSNA Philips

RSNA Next 20 Years

RSNA Michio Kaku

RSNA Tours and Events

RSNA Technical Exhibit Map South

RSNA Technical Exhibit Map North

RSNA 2017 Chicago

RSNA Corporate Partners 2017

RSNA waterfront Chicago

RSNA 2017 Chicago Landscape