Thankful for Biopharma Breakthroughs

JAAGNet Comment:

We believe during these tough times, people and companies step up and understand the urgent need to go above and beyond. Although it’s been a pretty tough year with little (if any) positive news, a lot of people and businesses have been working really hard to tackle COVID-19, whether in the area of therapeutics and/or vaccines. The following article is a great summary of the breakthroughs that have been made, and it shows us hope and promise that we can knock down the impacts this virus could have had on our global society. – Peter


For so many, 2020 has been a bleak year filled with uncertainty and anxiety directly related to the COVID-19 pandemic that has surged across the globe and led to the deaths of more than 1.4 million people, including close to 260,000 in the United States.

Despite the constant need for social distancing, mask-wearing, and the isolation and economic uncertainty that resulted from the outbreak, there is still much to be thankful for when families gather around a virtual table to break bread and carve the turkey this year. And one uniting bit of thankfulness the global community can share in is the prowess the international pharmaceutical industry displayed in addressing COVID-19. Following the outbreak that originated in China, then spread across Asia and into Europe, the pharmaceutical industry pivoted on a dime to tackle the global threat. Ongoing research was put on the back burner as scientists began to focus on understanding the virus and assessing which medications could be used against it. The virus was also sequenced, and hundreds of vaccine projects were initiated. The industry, along with scientists from various government agencies and academic institutions, joined together in a united front against the global pandemic.

And those efforts are now beginning to pay off. In Russia and China, vaccines are already being distributed to front-line workers and manufacturing is ramping up for broader distribution. In the west, we are just weeks away from seeing the first coronavirus vaccine receive Emergency Use Authorization. The mRNA vaccine candidate developed by Pfizer and Germany-based BioNTech demonstrated 95% efficacy in clinical trials. The U.S. Food and Drug Administration (FDA) will review the data on Dec. 10.

When that vaccine is greenlit (as it most likely will be), the limited number of doses currently available will roll out within 24 hours and inoculation will begin. Fortunately, more vaccines will likely see approval in the United States and Europe, which means more people will receive some protection against the virus. Moderna reported vaccine efficacy of 94.5%, and earlier this week AstraZeneca announced 90% efficacy for its vaccine candidate. Novavax and Johnson & Johnson are expected to release data soon, as are Merck and other companies.

The vaccine approvals are the proverbial light at the end of the tunnel that is COVID-19. High rates of inoculation will lead to herd immunity against the virus and that is something for which to be thankful.

But it’s not just vaccines that have been developed for COVID-19. The FDA recently authorized two antibody treatments for the virus: Eli Lilly’s bamlanivimab and Regeneron’s REGN-COV2, which had previously been used to treat President Donald Trump’s COVID-19. Both antibody treatments do have limits on their use: they are not meant for COVID-19 patients who require supplemental oxygen or are on ventilators.

Gilead Sciences' remdesivir broke through as the first COVID-19 drug to receive full approval from the FDA, as a medication that can shorten recovery time for infected patients. Despite its approval, remdesivir has received a rocky reception, with the World Health Organization recommending against its use, citing limited evidence of benefit. Other drugs have received similar receptions over the course of the pandemic. While remdesivir has clinical data supporting its approval, other medications, such as hydroxychloroquine, have only anecdotal data backing any efficacy against the virus. Still, those COVID-19 patients who have benefited from the treatments are surely thankful for any edge against the virus they received.

COVID-19 has certainly dominated our landscape over the past nine months, but other illnesses continue to negatively impact the human condition. COVID research has been a primary focus, but that has not put a halt to the development of treatments for other diseases, including rare diseases.

This week, Alnylam won approval for Oxlumo (lumasiran), the first drug approved by the FDA for primary hyperoxaluria type 1, an ultra-rare genetic disease that causes deposits of calcium oxalate crystals to form in the kidneys and urinary tract, which can lead to painful and recurrent kidney stones, nephrocalcinosis, progression to kidney failure and systemic organ dysfunction. Also this week, the FDA approved Eiger Pharmaceuticals' Zokinvy, the first drug approved to treat Hutchinson-Gilford Progeria Syndrome and processing-deficient Progeroid Laminopathies. The two genetic diseases cause premature, rapid aging that dramatically decreases the lifespan of children affected. In June, Novartis became the first company to the finish line with a treatment for Adult-Onset Still’s Disease (AOSD), a rare auto-inflammatory disease of unknown origin. Ilaris (canakinumab) was previously approved for Systemic Juvenile Idiopathic Arthritis (SJIA) in patients aged 2 years and older.

These approvals and others not mentioned that improve quality of life and stave off premature death are all things the pharmaceutical industry and its countless, dedicated employees have provided for which we should be thankful.

Originally published by
Alex Keown | November 26, 2020
BioSpace



When put up against five experienced, fellowship-trained radiologists, DeepCOVID-XR was able to process a set of 300 test X-rays in about 18 minutes compared to about two and a half to three and a half hours. (Getty Images)

Researchers at Northwestern University have trained an artificial intelligence algorithm to automatically detect the signs of COVID-19 on a basic X-ray of the lungs, and it’s capable of outperforming a team of specialized readers.

The developers said the AI could be used to rapidly screen patients upon admission to a hospital, especially for reasons unrelated to coronavirus symptoms, and trigger protocols to help protect healthcare workers.

“It could take hours or days to receive results from a COVID-19 test,” said Ramsey Wehbe, a cardiologist and postdoctoral fellow in AI at the Northwestern Medicine Bluhm Cardiovascular Institute. “AI doesn’t confirm whether or not someone has the virus. But if we can flag a patient with this algorithm, we could speed up triage before the test results come back.”

Called DeepCOVID-XR, the machine learning program was able to spot COVID-19 in X-rays about 10 times faster than thoracic radiologists and 1% to 6% more accurately.

“We are not aiming to replace actual testing,” said Aggelos Katsaggelos, the Joseph Cummings Professor of Electrical and Computer Engineering at Northwestern and senior author of the team’s study published in the journal Radiology. “X-rays are routine, safe and inexpensive. It would take seconds for our system to screen a patient and determine if that patient needs to be isolated.” 

Trained and tested on a data set of more than 17,000 X-ray images, the algorithm identified patterns in patients with COVID-19: Instead of a clear scan, their lungs appeared patchy and hazy as air sacs became inflamed and filled with fluid instead of air.

These are similar to cases of pneumonia, heart failure or other pulmonary conditions, but the AI was able to tell the difference and spot the contagious disease. Still, there’s a limit to radiologic diagnosis, as not all carriers of COVID-19 may show signs of illness, especially during the early stages of an infection.

“In those cases, the AI system will not flag the patient as positive,” said Wehbe. “But neither would a radiologist.”


Chest X-rays and AI overlays provided by DeepCOVID-XR (Northwestern University) 

When put up against five experienced, fellowship-trained radiologists, DeepCOVID-XR was able to process a set of 300 test X-rays in about 18 minutes, compared to about two and a half to three and a half hours. The AI also delivered an accuracy rate of 82%, about on par with the group’s range of 76% to 81%.
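The "about 10 times faster" figure cited earlier can be sanity-checked against these timings; a minimal sketch, using the numbers from the article and assuming the midpoint of the radiologists' reported range:

```python
# Figures from the article: DeepCOVID-XR read 300 test X-rays in ~18 minutes;
# the radiologists took roughly 2.5 to 3.5 hours for the same set.
ai_minutes = 18
radiologist_minutes = (2.5 + 3.5) / 2 * 60  # midpoint of the range, in minutes

speedup = radiologist_minutes / ai_minutes
print(f"{speedup:.1f}x faster")  # -> 10.0x faster, consistent with the article
```
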

Additionally, the researchers have made the algorithm publicly available, allowing others to train it with new data, with the goal of eventually getting the program into the clinic.

“Radiologists are expensive and not always available,” Katsaggelos said. “X-rays are inexpensive and already a common element of routine care. This could potentially save money and time—especially because timing is so critical when working with COVID-19.”

Originally published by
Conor Hale | November 25, 2020
Fierce Biotech



Jacob Bell / BioPharma Dive

The latest batch of breakthrough device designations from FDA support an array of medtech innovations, from a novel treatment for sleep apnea to a tissue regeneration technology designed to aid spinal cord injury patients. Several technologies designated within the past month are diagnostics, with two targeting breast cancer and one designed to improve the diagnosis of a deadly gastrointestinal condition in premature infants. 

FDA's Breakthrough Devices Program aims to speed development and review of technology that could offer a better treatment for life-threatening or debilitating disease.

4D Path last week said it received a breakthrough designation for a computer-aided diagnostic platform that uses digitized histopathology images to better determine breast cancer characteristics such as invasiveness and grades. The software-as-a-medical-device platform is designed to make clinical-grade predictions from breast biopsy and resection images to improve diagnostic accuracy.

According to the Newton, Massachusetts-based company, the device reduces the error rate on biopsies obtained before surgery from 20% to less than 5%. The technology incorporates statistical physics and tumor biology to identify digital cancer biomarkers, aiding in treatment selection.

On the same day that 4D Path announced its breakthrough device designation, Lumicell, a fellow Newton medtech, said it received FDA's drug center's fast-track designation for its LUM imaging system to detect and remove cancerous tissue in the treatment of breast cancer. Lumicell said it received the special status with rolling review by FDA, augmenting its previously granted breakthrough device designation for breast cancer and all solid tumors.

The system allows surgeons to see and remove residual cancer in real-time, focusing on the cells left behind in the surgical cavity rather than on the lumpectomy specimen, with the aim to reduce the risk of second surgeries and cancer recurrence. Lumicell said it is continuing enrollment in its breast cancer pivotal trial and, with rolling review, will be able to submit modules for a New Drug Application with FDA as they are ready.

Also in mid-November, Louisiana State University announced that a technology to diagnose necrotizing enterocolitis, an often fatal condition in premature infants, gained a breakthrough device designation. Called NECDetect and invented by professor Sunyoung Kim, the noninvasive biomarker test is performed on stool samples. There is no clinical test that has been established as the gold standard to diagnose NEC. The new test identifies 93% true positives and 95% true negatives, according to LSU. Kim has started a spinout company to further develop and commercialize the product.
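The 93% true-positive and 95% true-negative rates reported by LSU correspond to the test's sensitivity and specificity. As an illustrative sketch (not from the article), Bayes' rule shows how those rates translate into predictive values at an assumed disease prevalence; the 7% prevalence figure below is a hypothetical placeholder:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Convert sensitivity/specificity into PPV/NPV via Bayes' rule."""
    tp = sensitivity * prevalence                # true positives
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    fn = (1 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1 - prevalence)          # true negatives
    ppv = tp / (tp + fp)   # probability a positive result is a true case
    npv = tn / (tn + fn)   # probability a negative result is truly negative
    return ppv, npv

# Sensitivity/specificity from the article; prevalence is an assumption.
ppv, npv = predictive_values(0.93, 0.95, 0.07)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # -> PPV: 58.3%, NPV: 99.4%
```

Even a highly specific test yields many false positives when the condition is rare, which is why predictive value, not just accuracy, matters for a screening tool like this.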

Originally published by
Susan Kelly | November 24, 2020
Medtech Dive


Research efforts in Florida will be a model for how Berg and AdventHealth roll out similar work throughout the U.S. The Berg artificial intelligence platform can provide info on why conditions such as obesity and diabetes leave people more vulnerable to COVID-19. (AdventHealth)

AdventHealth has partnered with biotech firm Berg to gain insights on people who have tested positive for COVID-19 and to reduce mortality rates from the disease.

AdventHealth, a nonprofit health system based in Orlando, Florida, has diagnosed and treated more than 25,000 patients with COVID-19 to date. With more than 250,000 Americans having died during the COVID-19 pandemic, a key reason for Berg to work with AdventHealth is to understand COVID-19 better and also help triage patients suffering from the virus, explained Niven Narain, Ph.D., co-founder, president and CEO of Berg.

Under the agreement announced Monday, the two organizations will use Berg’s proprietary artificial-intelligence-enabled Interrogative Biology platform with AdventHealth’s patient data. Narain explained that the platform processes biological patient samples on the front end and feeds that into a back-end AI analytical platform. It incorporates machine learning as well as a type of AI called a Bayesian network. With ML, data scientists generate data insights from a hypothesis, but with Bayesian AI the data generate the hypothesis. You then validate the hypothesis in the laboratory and with clinical records, Narain told Fierce Healthcare.
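The contrast Narain draws can be made concrete with a toy Bayesian update. This is an illustrative sketch only, not Berg's proprietary platform: belief over two hypothetical hypotheses is revised as patient outcomes arrive, so the data, not a pre-set hypothesis, drive which explanation gains support:

```python
# Two hypothetical hypotheses about a treatment; priors are assumptions.
prior = {"drug_helps": 0.5, "no_effect": 0.5}
# Assumed probability of observing a recovery under each hypothesis.
p_recovery = {"drug_helps": 0.8, "no_effect": 0.5}

observations = ["recovery", "recovery", "no_recovery", "recovery"]

posterior = dict(prior)
for obs in observations:
    # Multiply each hypothesis by the likelihood of this observation...
    for h in posterior:
        p = p_recovery[h]
        posterior[h] *= p if obs == "recovery" else (1 - p)
    # ...then renormalize so the beliefs sum to 1.
    total = sum(posterior.values())
    posterior = {h: v / total for h, v in posterior.items()}

print(posterior)  # belief mass shifts toward the hypothesis the data support
```

After mostly positive outcomes, the posterior favors "drug_helps"; in a real Bayesian network the same mechanism runs over many interacting variables rather than two isolated hypotheses.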

AdventHealth and Berg will build a patient registry biobank that will allow data scientists to find the best treatments for patients with COVID-19. The biobank will incorporate data on all patients who have undergone COVID-19 tests at AdventHealth. Data scientists will study the length of hospital or ICU stays and which medications the health system administered, as well as personal medical history and patient outcomes.

Research efforts in Florida will be a model for how Berg and AdventHealth roll out similar work throughout the U.S. The Berg AI platform can provide info on why conditions such as obesity and diabetes leave people more vulnerable to COVID-19, according to Steven Smith, M.D., senior vice president and chief scientific officer at AdventHealth.

Growing an existing relationship

AdventHealth had already been using Berg’s technology to boost outcomes and develop precision medicine for patients with nonalcoholic fatty liver disease (NAFLD) and sarcopenia, which is a reduction in skeletal mass due to aging. Data scientists from Berg and AdventHealth will collaborate in a similar way with a focus on the COVID-19 pandemic.

“We already had a relationship, and this just allowed us to magnify and grow that,” Smith told Fierce Healthcare.

While the work on NAFLD and sarcopenia was primarily focused on drug discovery and development, for the COVID-19 research, Berg and AdventHealth have developed a data lake from which they have pulled information on positive COVID-19 cases, Smith explained.

The biobank patient registry will launch in two phases: In the first phase, the organizations will release patient demographics, COVID-19 clinical information and patient medical histories. In the second phase, the organizations will incorporate data from across AdventHealth locations in multiple U.S. states. They will also analyze how chronically administered medications could be linked with a better outcome or a lower probability of infection with SARS-CoV-2, the coronavirus responsible for COVID-19.

Gaining insights from AI

A desired result of the research is to generate risk engines and understand how to triage patients better, Smith noted.

“There are a few risk engines out there in the literature,” Smith said. “They perform OK—they're not great—so there's certainly the potential for advancement in that way.”

Smith’s team at AdventHealth will receive updates from Berg on the data approximately every 30 days as the second wave of the COVID-19 crisis gets underway, according to Narain.

“This is what makes this so important because the next few months are going to be presumably very difficult, and our goal of this project is to try to deliver insights and answers while this is all going on,” Narain said. “As quickly as we get it, we’ll be feeding it into the system and every 30 days or so update the insight.”

The AI data will inform doctors regarding which medications, like remdesivir and dexamethasone, work on certain populations, Narain explained.  

“It will bring together the efficiency among physicians, patients, and drug developers so that the ecosystem of information sharing becomes easier,” Narain said.

Smith added that the advantage of AI is to be able to provide key insights in a way researchers didn’t anticipate.

“The broad idea of AI layered on top of rich data is to be able to break those chains so to speak around how we think about particular illnesses and flip that upside down,” Smith said.

Originally published by
Brian T. Horowitz | November 23, 2020
Fierce Healthcare


Researchers at MIT and the Indian Institute of Technology have come up with a way to generate the steam required by autoclaves using just the power of sunlight to help maintain safe, sterile equipment at low cost in remote locations. Image: Courtesy of the researchers. Edited by MIT News.

Autoclaves, the devices used to sterilize medical tools in hospitals, clinics, and doctors’ and dentists’ offices, require a steady supply of pressurized steam at a temperature of about 125 degrees Celsius. This is usually provided by electrical or fuel-powered boilers, but in many rural areas, especially in the developing world, power can be unreliable or unavailable, and fuel is expensive.

Now, a team of researchers at MIT and the Indian Institute of Technology has come up with a way to generate the needed steam passively, using just the power of sunlight, with no need for fuel or electricity. The device, which would require a solar collector of about 2 square meters (or yards) to power a typical small-clinic autoclave, could maintain safe, sterile equipment at low cost in remote locations. A prototype was successfully tested in Mumbai, India.

The system is described today in the journal Joule, in a paper by MIT graduate student Lin Zhao, MIT Professor Evelyn Wang, MIT Professor Gang Chen, and 10 others at MIT and IIT Bombay.

The key to the new system is the use of optically transparent aerogel, a material developed over the last few years by Wang and her collaborators. The material is essentially a lightweight foam made of silica, the material of beach sand, and consists mostly of air. Light as it is, the material provides effective thermal insulation, reducing the rate of heat loss by tenfold.

This transparent insulating material is bonded onto the top of what is essentially off-the-shelf equipment for producing solar hot water, which consists of a copper plate with a heat-absorbing black coating, bonded to a set of pipes on the underside. As the sun heats the plate, water flowing through the pipes underneath picks up that heat. But with the addition of the transparent insulating layer on top, plus polished aluminum mirrors on each side of the plate to direct extra sunlight at the plate, the system can generate high-temperature steam instead of just hot water. The system uses gravity to feed water from a tank into the plate; the steam then rises to the top of the enclosure and is fed out through another pipe, which carries the pressurized steam to the autoclave. A steady supply of steam must be maintained for 30 minutes to achieve proper sterilization.

Since much of the developing world faces limited availability of reliable electricity or affordable fuel, “we saw this as an opportunity to think about how we can potentially create a low-cost, passive, solar-driven system to generate steam, at the conditions that are necessary for autoclaving or for medical sterilization,” explains Wang, who is the Gail E. Kendall Professor of Mechanical Engineering and head of the mechanical engineering department.

Being able to test the system in Mumbai was a bonus, she says, because of the city’s “relevance and importance” as the type of location that might benefit from such low-cost steam-generation equipment.

In the Mumbai tests, even though the sky was hazy and cloudy, providing only 70 percent insolation compared to a sunny day, the device succeeded in producing the saturated steam needed for sterilization for the required half hour period.

The test was carried out with a small-scale unit, only about a quarter of a square meter, about the size of a hand towel, but it showed that the steam production rate was sufficient that a similar unit of somewhere between 1 and 3 square meters would be sufficient to power a benchtop autoclave of the kind typically used in a doctor’s office, Zhao says.

The main limiting factor for practical deployment of such devices is the availability of the aerogel material. One company, founded by Elise Strobach PhD ’20, who is a co-author of this paper, is already attempting to scale up the production of transparent aerogel, for use in high thermal efficiency windows. But so far the material is only produced in small amounts using relatively expensive laboratory-grade supercritical drying equipment, so widespread adoption of such a sterilization system is likely still a few years off, the researchers say.

Since the other components, except for the aerogel itself, are already widely available at low cost throughout the developing world, fabrication and maintenance of such systems may ultimately be practical in the areas where they would be used. The parts needed for the quarter-square-meter prototype came to less than $40, Zhao says, so a system sufficient for a typical small autoclave would be likely to cost $160 or so, once the necessary aerogel material becomes commercialized. “If we can get the supply of aerogel, the whole thing can be built locally, with local suppliers,” he says.
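Zhao's cost estimate follows from simple area scaling; a minimal sketch, using the figures from the article and assuming parts cost scales roughly linearly with collector area:

```python
# Prototype figures from the article (aerogel itself excluded).
prototype_area_m2 = 0.25   # quarter-square-meter prototype
prototype_cost_usd = 40.0

def estimated_cost(area_m2):
    """Assume parts cost scales linearly with collector area."""
    return prototype_cost_usd * (area_m2 / prototype_area_m2)

print(estimated_cost(1.0))  # -> 160.0, matching the ~$160 figure cited
```
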

The process could also be used for a variety of other purposes, the team says. For example, many food and beverage processing systems rely on high-temperature steam, which is typically provided by fossil-fuel powered boilers. Passive solar-powered systems to deliver that steam would eliminate the fuel costs, and so could be an attractive option in many industries, they say.

Ultimately, such systems should be much more cost-effective than systems that concentrate sunlight by tenfold or more to generate steam, because those require expensive mirrors and mountings, as opposed to the simplicity of this aerogel-based approach.

“This is a significant advance,” says Ravi Prasher, a professor of mechanical engineering at the University of California at Berkeley and an associate director at Lawrence Berkeley National Laboratory, who was not involved in this work. “Generating high-temperature steam with high energy efficiency has been a challenge. Here the authors have achieved both.”

“The quality of the research is very high,” Prasher adds. “Access to passive sterilization techniques for low-income communities who do not have access to reliable electricity is a big deal. Therefore, the passive solar device developed by the MIT team is very significant in that regard.”

Originally published by
David L Chandler | MIT News Office | November 18, 2020

The research team also included Bikram Bhatia, Lenan Zhang, Arny Leroy, Sungwoo Yang, Thomas Cooper, and Lee Weinstein at MIT, and Manoj Yadav, Anish Modi, and Shireesh Kedsare at IIT Bombay. The work received support from the Tata Center at MIT and from the U.S. Department of Energy.


The Internet of Medical Things (IoMT), including wearable personal fitness trackers, is proving to be a more reliable measure of physical activity and risk than traditional methods. (Credit: Getty Images)

The Internet of Medical Things (IoMT) market is expanding rapidly, with over 500,000 medical technologies currently available, from blood pressure and glucose monitors to MRI scanners. AI is poised to contribute analysis crucial to innovations such as smart hospitals.

Today’s internet-connected devices aim to improve efficiencies, lower care costs and drive better outcomes in healthcare, according to a recent account in HealthTech Magazine. Devices in the IoMT domain extend to wearable external medical devices such as skin patches and insulin pumps; implanted medical devices such as pacemakers and cardioverter defibrillators; and stationary devices such as for home monitoring and connecting imaging machines.   

Projections for IoMT market size were aggressive before the COVID-19 pandemic hit, with Deloitte sizing the market at $158.1 billion by 2022, with the connected medical device segment expected to account for up to $52.2 billion of that.

Now the estimates are growing. The global IoMT market was valued at $44.5 billion in 2018 and is expected to grow to $254.2 billion in 2026, according to AllTheResearch. The smart wearable device segment of IoMT, inclusive of smartwatches and sensor-laden smart shirts, made up for the largest share of the global market in 2018, at roughly 27 percent, the report found.  
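The growth implied by the AllTheResearch figures can be expressed as a compound annual growth rate; a quick sketch using the market values from the article:

```python
# Global IoMT market values from AllTheResearch, in billions of USD.
v2018, v2026 = 44.5, 254.2
years = 2026 - 2018

# Implied compound annual growth rate (CAGR).
cagr = (v2026 / v2018) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 24.3%
```
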

This area of IoMT is poised for even further growth as artificial intelligence is integrated into connected devices and can prove capable of real-time, remote measurement and analysis of patient data. 

Fitbit Trackers Found to Help Patients with Heart Disease 

Evidence is coming in on the effectiveness of IoMT for health care. A study conducted by researchers from Cedars-Sinai Medical Center and UCLA found that Fitbit activity trackers were able to more accurately evaluate patients with ischemic heart disease by recording their heart rate and accelerometer data simultaneously. A survey last year of 100 health IT leaders by Spyglass Consulting Group found that 88% of healthcare providers are investing in remote patient monitoring (RPM) equipment. This is especially true for patients whose conditions are considered unstable and at risk for hospital admission.

Cost avoidance was the primary investment driver for RPM solutions, which aim to reduce hospital readmissions, emergency department visits, and overall healthcare utilization, the study stated.

Wearable activity trackers have also proven to be a more reliable measure of physical activity and five-year risk than traditional methods, according to a study by Johns Hopkins Medicine, as reported in mHealthIntelligence.

Adult participants between 50 and 85 years old wore an accelerometer device at the hip for seven consecutive days to gather information on their physical activity. Individual data came from responses to demographic, socioeconomic, and health-related survey questions, along with medical records and clinical laboratory test results.

IoMT Devices Seen as Helping to Control Health Care Costs  

Medical cost reductions of $300 billion are being estimated by Goldman Sachs, through remote patient monitoring and increased oversight of medication use. Startup activity is picking up. Proteus Discover, for example, has focused its smart pill capabilities on measuring the effectiveness of medication treatment; and HQ’s CorTemp is using its smart pills to monitor patients’ internal health and transmit wireless data such as core temperatures, which can be critical in life or death situations. 

AI systems are seen as able to reduce diagnostic and therapeutic errors in human clinical practice, according to an account in IDST. Developing IoMT strategies that match sophisticated sensors with AI-backed analytics will be critical for developing smart hospitals of the future. “Sensors, AI and big data analytics are vital technologies for IoMT as they provide multiple benefits to patients and facilities alike,” stated Varun Babu, senior research analyst with Frost & Sullivan TechVision Research, which studies emerging technology for IT.

The rise of AI and its alliance with IoT is one of the critical aspects of the digital transformation in modern healthcare, according to an account in IoTforAll. The pairing is likely to speed up complicated procedures and data functionalities that are otherwise tedious and time-consuming. AI along with sensor technologies from IoT can lead to better decision-making. Advances in connectivity through AI are expected to promote an understanding of therapy and enable preventive care that promises a better future.

The impact of AI on personal healthcare is attracting wide comment. “AI is transforming every industry in which it is implemented, with its impact upon the healthcare sector already saving lives and improving medical diagnoses,” stated Dr. Ian Roberts, Director of Therapeutic Technology at Healx, a biotechnology company based in Cambridge, England, in an account in BBH (Building Better Healthcare). “The transformative effect of AI is set to switch healthcare on its head, as the technology leads to a shift from reactive treatments targeting populations to proactive prevention tailored to the individual patient.”  

In the future, AI-generated healthcare recommendations are seen as extending to include personalized treatment plans. “Currently we are in the infancy of AI in healthcare, and each company drives forward another piece of the puzzle and once fully integrated the future of medicine will be forever transformed,” Dr. Roberts stated.   

However, the increasingly-connected environment of IoMT is seen as bringing new risks as cyber criminals seek to exploit device and network vulnerabilities to wreak havoc. A recent global survey by Extreme Networks, a network infrastructure provider, found that one in five healthcare IT professionals are unsure if every medical device on their network has all the latest software patches installed — creating a porous security infrastructure that could potentially be bypassed. 

“2020 will be the year when healthcare organizations of all sizes will need to realize that they are easy pickings for cyber criminals, and put a robust, reliable and resilient network security infrastructure in place to protect themselves adequately,” stated Bob Zemke, director of healthcare solutions for Extreme.  

Data science is seen as leading to more precise analytics. “In 2020, we can expect to see better patient outcomes fueled largely by the growing prevalence of data science and analytics,” stated Alan Jacobson, chief data and analytics officer at Alteryx, a software company providing advanced analytics tools. “Much of the data that is required to solve some really key challenges already exists in the public domain, and in the next year we expect more and more healthcare organizations will implement tools that help to assess this rich information as well as gain actionable insight.” The tools are seen as being effective in monitoring proper use of prescription drugs.

Originally published by
AI Trends Staff | November 12, 2020
AI Trends

Read the source articles and information in HealthTech Magazine, Deloitte, AllTheResearch, mHealthIntelligence, IDST, IoTforAll and in BBH (Building Better Healthcare). 

Bronze Level Contributor

A team of researchers from Kaunas University of Technology and Lithuanian University of Health Sciences proposed a non-invasive method for detection of melanoma. A patented computer-aided diagnostic system developed by Lithuanian scientists proved to be more than 90% accurate in detecting malignancy in diagnostic images of skin lesions acquired from 100 patients.

In Europe, melanoma is the fifth most common type of cancer and the major cause of death from skin cancer. Northern Europe has the region's highest age-standardised mortality rate, at 3.8 per 100,000, with an incidence of 23.4 per 100,000.

Excision of a primary tumour remains essential in diagnosing melanoma, and the decision for the operation is generally based on the dermatoscopic evaluation of the lesion. However, the accuracy of melanoma clinical diagnosis is only at 65% and strongly relies on the experience of the physician-dermatologist carrying out the analysis.

“The novelty of our method is that it combines diagnostic information obtained from different non-invasive imaging technologies such as optical spectrophotometry and ultrasound. Based on the results of our research, we can confirm that the developed automated system can complement the non-invasive diagnostic methods currently applied in the medical practice by efficiently differentiating melanoma from a melanocytic mole”, says Professor Renaldas Raišutis, head of the team behind the research at Kaunas University of Technology.

In the study, carried out by the researchers of two Lithuanian universities, diagnostic images of skin lesions acquired from 100 different patients were evaluated. By comparing and analysing complex data on skin tumours recorded by different techniques, the researchers were able to develop a novel diagnostic system, differentiating melanoma from a benign nevus with accuracy higher than 90%.

“An efficient diagnosis of an early-stage malignant skin tumour could save critical time: more patients could be examined and more of them could be saved”, says Prof Raišutis.

According to Prof Raišutis, the novel diagnostic system is firstly aimed at medical professionals; he estimates that the price of the final product might be affordable even for smaller medical institutions. However, the team is also thinking about the solutions for individual use at home.

Following the research, the prototype of the technology was developed, and the clinical research is being carried out complying with the requirements of the protocol for the clinical research confirmed by the regional bioethics committee.

The invention was patented in Lithuania (Patent No. LT6670B) and the patent applications filed for Patent Cooperation Treaty and the United States Patent and Trademark Office.

Originally published by
Kaunas University of Technology | November 9, 2020

original article

The above-described method and technology were developed within the framework of the project “SkinImageFusion: Complex Analysis Method of Spectrophotometric and Ultrasound Data for the Automatic Differential Diagnosis of Early Stage Malignant Skin Tumours”, funded by Lithuanian Research Council and carried out by a joint team of Kaunas University of Technology and Lithuanian University of Health Sciences (the latter headed by Professor Skaidra Valiukevičienė) in 2017–2020. Three doctoral dissertations were defended on the basis of this research.

The scientific publication “Diagnostics of Melanocytic Skin Tumours by a Combination of Ultrasonic, Dermatoscopic and Spectrophotometric Image Parameters” can be accessed here.

Gold Level Contributor

Stanford engineers have created a microlab half the size of a credit card that can detect COVID-19 in just 30 minutes. (Image credit: Getty Images)

Throughout the pandemic, infectious disease experts and frontline medical workers have asked for a faster, cheaper and more reliable COVID-19 test. Now, leveraging the so-called “lab on a chip” technology and the cutting-edge genetic editing technique known as CRISPR, researchers at Stanford have created a highly automated device that can identify the presence of the novel coronavirus in just a half-hour.

“The microlab is a microfluidic chip just half the size of a credit card containing a complex network of channels smaller than the width of a human hair,” said the study’s senior author, Juan G. Santiago, the Charles Lee Powell Foundation Professor of mechanical engineering at Stanford and an expert in microfluidics, a field devoted to controlling fluids and molecules at the microscale using chips.

The new COVID-19 test is detailed in a study published on Nov. 4 in the journal Proceedings of the National Academy of Sciences. “Our test can identify an active infection relatively quickly and cheaply. It’s also not reliant on antibodies like many tests, which only indicates if someone has had the disease, and not whether they are currently infected and therefore contagious,” explained Ashwin Ramachandran, a Stanford graduate student and the study’s first author.

The microlab test takes advantage of the fact that coronaviruses like SARS-CoV-2, the virus that causes COVID-19, leave behind tiny genetic fingerprints wherever they go in the form of strands of RNA, the genetic precursor of DNA. If the coronavirus’s RNA is present in a swab sample, the person from whom the sample was taken is infected.

To initiate a test, liquid from a nasal swab sample is dropped into the microlab, which uses electric fields to extract and purify any nucleic acids like RNA that it might contain. The purified RNA is then converted into DNA and then replicated many times over using a technique known as isothermal amplification.

Next, the team used an enzyme called CRISPR-Cas12 – a sibling of the CRISPR-Cas9 enzyme associated with this year’s Nobel Prize in Chemistry – to determine if any of the amplified DNA came from the coronavirus.

If so, the activated enzyme triggers fluorescent probes that cause the sample to glow. Here also, electric fields play a crucial role by helping concentrate all of the important ingredients – the target DNA, the CRISPR enzyme and the fluorescent probes – together into a tiny space smaller than the width of a human hair, dramatically increasing the chances they will interact.

“Our chip is unique in that it uses electric fields to both purify nucleic acids from the sample and to speed up chemical reactions that let us know they are present,” Santiago said.

The team created its device on a shoestring budget of about $5,000. For now, the DNA amplification step must be performed outside of the chip, but Santiago expects that within months his lab will integrate all the steps into a single chip.

Several human-scale diagnostic tests use similar gene amplification and enzyme techniques, but they are slower and more expensive than the new test, which provides results in just 30 minutes. Other tests can require more manual steps and can take several hours.

The researchers say their approach is not specific to COVID-19 and could be adapted to detect the presence of other harmful microbes, such as E. coli in food or water samples, or tuberculosis and other diseases in the blood.

“If we want to look for a different disease, we simply design the appropriate nucleic acid sequence on a computer and send it over email to a commercial maker of synthetic RNA. They mail back a vial with the molecule that completely reconfigures our assay for a new disease,” Ramachandran said.

The researchers are working with the Ford Motor Company to further integrate the steps and develop their prototype into a marketable product.

Originally published by
Andrew Myers | November 4, 2020
Stanford News | Stanford University

Original article

Santiago is a member of Stanford Bio-X and a faculty fellow in ChEM-H. Ramachandran is a Bio-X graduate student fellow. Additional Stanford contributors include doctoral scholar Diego A. Huyke, postdoctoral scholar Eesha Sharma, research scientist Malaya K. Sahoo, research scientist ChunHong Huang of the Department of Pathology, Professor of Pathology Niaz Banaei, and Professor of Pathology Benjamin A. Pinsky.

This research received financial support from the Stanford Chemistry Engineering & Medicine for Human Health (ChEM-H) program and from Ford Motor Company.


Silver Level Contributor

Spun out of the National University of Singapore, Breathonix said its breathalyzer system could be used to screen high-traffic areas for COVID-19, including in airports, hotels, sports venues and transportation hubs. (Pixabay)

Singapore’s Breathonix has said a clinical trial of its COVID-19 breathalyzer test was able to achieve at least 90% accuracy after screening participants on-site for 60 seconds.

The company’s test uses mass spectrometry to analyze the thousands of volatile organic compounds that people exhale with every breath, to establish a specific signal among those with an active coronavirus infection.

Using a machine learning algorithm, this generates a “bio-fingerprint of COVID-19,” said co-founder and CEO Jia Zhunan. "Based on more than six years of research at the National University of Singapore, we have developed [a] highly sophisticated proprietary breath sampling technique and analytical method to achieve high accuracy, sensitivity and specificity.”

The NUS spinout said its ongoing pilot study of 180 people, conducted by the city-state’s National Centre for Infectious Diseases, showed an overall sensitivity of 93% and a specificity of 95%. Breathonix said more trials will be required to improve and validate the accuracy of the technology.
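Sensitivity and specificity, the two figures reported for the pilot, are straightforward functions of a test's confusion matrix. A minimal Python sketch — the counts below are invented to illustrate the arithmetic and are not the trial's actual data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = true positive rate; specificity = true negative rate."""
    sensitivity = tp / (tp + fn)   # fraction of infected people correctly flagged
    specificity = tn / (tn + fp)   # fraction of uninfected people correctly cleared
    return sensitivity, specificity

# Hypothetical counts for a 180-person pilot (illustrative only):
sens, spec = sensitivity_specificity(tp=28, fn=2, tn=143, fp=7)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```

With these hypothetical counts, 28 of 30 infected participants test positive and 143 of 150 uninfected participants test negative, reproducing headline figures of roughly 93% and 95%.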

By incorporating disposable mouthpieces and one-way valves, the company said its breathalyzer system could potentially be used for mass screening in high-traffic areas, such as airports, hotels, sports venues and transportation hubs. In addition, Breathonix estimated the total cost could reach $20 per test.

"The company is ready to deploy pilots in Singapore in the coming weeks and to extend to international pilots in the coming months, pending regulatory approval," said co-founder and Chief Operating Officer Du Fang.

Originally published by
Conor Hale | November 4, 2020
Fierce Biotech

Gold Level Contributor

Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at UH, led a group of researchers that developed a cardiac patch made from fully rubbery electronics that can be placed directly on the heart to collect electrophysiological activity, temperature, heartbeat and other indicators, all at the same time.

Pacemakers and other implantable cardiac devices used to monitor and treat arrhythmias and other heart problems have generally had one of two drawbacks – they are made with rigid materials that can’t move to accommodate a beating heart, or they are made from soft materials that can collect only a limited amount of information.

Researchers led by a mechanical engineer from the University of Houston have reported in Nature Electronics a patch made from fully rubbery electronics that can be placed directly on the heart to collect electrophysiological activity, temperature, heartbeat and other indicators, all at the same time.

Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at UH and corresponding author for the paper, said the device marks the first time bioelectronics have been developed based on fully rubbery electronic materials that are compatible with heart tissue, allowing the device to solve the limitations of previous cardiac implants, which are mainly made out of rigid electronic materials.

“For people who have heart arrhythmia or a heart attack, you need to quickly identify the problem,” Yu said. “This device can do that.” Yu is also a principal investigator with the Texas Center for Superconductivity at UH.

In addition to the ability to simultaneously collect information from multiple locations on the heart – a characteristic known as spatiotemporal mapping – the device can harvest energy from the heart beating, allowing it to perform without an external power source. That allows it to not just track data for diagnostics and monitoring but to also offer therapeutic benefits such as electrical pacing and thermal ablation, the researchers reported.

Yu is a leader in the development of fully rubbery electronics with sensing and other biological capabilities, including for use in robotic hands, skins and other devices. The epicardial bioelectronics patch builds upon that with a material whose mechanical properties mimic cardiac tissue, allowing for a closer interface and reducing the risk that the implant could damage the heart muscle.

“Unlike bioelectronics primarily based on rigid materials with mechanical structures that are stretchable on the macroscopic level, constructing bioelectronics out of materials with moduli matching those of the biological tissues suggests a promising route towards next-generational bioelectronics and biosensors that do not have a hard–soft interface for the heart and other organs,” the researchers wrote. “Our rubbery epicardial patch is capable of multiplexed ECG mapping, strain and temperature sensing, electrical pacing, thermal ablation and energy harvesting functions.”

In addition to Yu, researchers from UH, the Texas Heart Institute and the University of Chicago were involved. They include first authors Kyoseung Sim, Faheem Ershad and Yongcao Zhang, all with UH; Pinyi Yang, Hyunseok Shim, Zhoulyu Rao, Yuntao Lu and Anish Thukral, all with UH; Abdelmotagaly Elgalad, Yutao Xi and Doris A. Taylor with the Texas Heart Institute; and Bozhi Tian with the University of Chicago. Sim, a former member of the Yu group, is currently an assistant professor at the Ulsan National Institute of Science and Technology in Ulsan, Korea.

Originally published by
University of Houston News | November 2, 2020

Original article

Silver Level Contributor

Kathryn Atkinson, a patient at Houston Methodist Hospital, participates in a smartphone screening test to analyze stroke-like symptoms she's experiencing. The test is powered by a machine learning algorithm developed by researchers at Penn State's College of Information Sciences and Technology and Houston Methodist Hospital, which could significantly reduce the amount of time it takes physicians to diagnose a stroke.  IMAGE: HOUSTON METHODIST HOSPITAL

UNIVERSITY PARK, Pa. — A new tool created by researchers at Penn State and Houston Methodist Hospital could diagnose a stroke based on abnormalities in a patient’s speech ability and facial muscular movements, and with the accuracy of an emergency room physician — all within minutes from an interaction with a smartphone.

“When a patient experiences symptoms of a stroke, every minute counts,” said James Wang, professor of information sciences and technology at Penn State. “But when it comes to diagnosing a stroke, emergency room physicians have limited options: send the patient for often expensive and time-consuming radioactivity-based scans or call a neurologist — a specialist who may not be immediately available — to perform clinical diagnostic tests.”

Wang and his colleagues have developed a machine learning model to aid in, and potentially speed up, the diagnostic process by physicians in a clinical setting.

“Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan,” said Wang. “We are trying to simulate or emulate this process by using our machine learning approach.”

The team’s novel approach is the first to analyze the presence of stroke among actual emergency room patients with suspicion of stroke by using computational facial motion analysis and natural language processing to identify abnormalities in a patient’s face or voice, such as a drooping cheek or slurred speech.

The results could help emergency room physicians to more quickly determine critical next steps for the patient. Ultimately, the application could be utilized by caregivers or patients to make self-assessments before reaching the hospital.

“This is one of the first works that is enabling AI to help with stroke diagnosis in emergency settings,” added Sharon Huang, associate professor of information sciences and technology at Penn State.

To train the computer model, the researchers built a dataset from more than 80 patients experiencing stroke symptoms at Houston Methodist Hospital in Texas. Each patient was asked to perform a speech test to analyze their speech and cognitive communication while being recorded on an Apple iPhone.

“The acquisition of facial data in natural settings makes our work robust and useful for real-world clinical use, and ultimately empowers our method for remote diagnosis of stroke and self-assessment,” said Huang.

Testing the model on the Houston Methodist dataset, the researchers found that its performance achieved 79% accuracy — comparable to clinical diagnostics by emergency room doctors, who use additional tests such as CT scans. However, the model could help save valuable time in diagnosing a stroke, with the ability to assess a patient in as little as four minutes.
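The multimodal idea — combining a facial-motion score and a speech score into one decision — can be illustrated with a toy late-fusion classifier. The weights, bias and input scores below are invented for illustration; the published model is a deep network trained on the Houston Methodist data, not this two-parameter sketch:

```python
import math

def late_fusion_stroke_score(facial_score, speech_score,
                             w_face=2.0, w_speech=2.5, bias=-2.2):
    """Toy late fusion: combine two modality scores through a logistic layer.
    Scores lie in [0, 1]; higher means more abnormal (e.g. a drooping cheek
    or slurred speech). All parameters are illustrative, not the paper's."""
    z = w_face * facial_score + w_speech * speech_score + bias
    return 1.0 / (1.0 + math.exp(-z))   # probability-like stroke risk

# A patient with a markedly drooping cheek and moderately slurred speech:
risk = late_fusion_stroke_score(facial_score=0.9, speech_score=0.7)
```

A clinical tool would threshold such a risk score to decide whether to send the patient straight for a CT scan.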

“There are millions of neurons dying every minute during a stroke,” said John Volpi, a vascular neurologist and co-director of the Eddy Scurlock Stroke Center at Houston Methodist Hospital. “In severe strokes it is obvious to our providers from the moment the patient enters the emergency department, but studies suggest that in the majority of strokes, which have mild to moderate symptoms, a diagnosis can be delayed by hours, and by then a patient may not be eligible for the best possible treatments.”

“The earlier you can identify a stroke, the better options (we have) for the patients,” added Stephen T.C. Wong, John S. Dunn, Sr. Presidential Distinguished Chair in Biomedical Engineering at the Ting Tsung and Wei Fong Chao Center for BRAIN and Houston Methodist Cancer Center. “That’s what makes an early diagnosis essential.”

Volpi said that physicians currently use a binary approach toward diagnosing strokes: They either suspect a stroke, sending the patient for a series of scans that could involve radiation; or they do not suspect a stroke, potentially overlooking patients who may need further assessment.

“What we think in that triage moment is being either biased toward overutilization (of scans, which have risks and benefits) or underdiagnosis,” said Volpi, a co-author on the paper. “If we can improve diagnostics at the front end, then we can better expose the right patients to the right risks and not miss patients who would potentially benefit.”

He added, “We have great therapeutics, medicines and procedures for strokes, but we have very primitive and, frankly, inaccurate diagnostics.”

Other collaborators on the project include Tongan Cai and Mingli Yu, graduate students working with Wang and Huang at Penn State; and Kelvin Wong, associate research professor of electronic engineering in oncology at Houston Methodist Hospital.

The team presented their paper, “Toward Rapid Stroke Diagnosis with Multimodal Deep Learning,” last week at the virtual 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI).

Penn State has also filed a provisional patent application jointly with Houston Methodist on the computer model.

Originally published by
Jessica Hallman | October 20, 2020
Penn State News

original article

Silver Level Contributor

Researchers in Europe are working on elastic membrane patches that mimic how the skin looks and feels and can collect information related to the wearer. Image credit - Aaron Lee/Unsplash

Picture this: You’ve experienced no physical sensation beyond your wrists for years, then a doctor drapes a thin, flexible membrane over your hand and, like magic, you can feel the trickle of water through your fingers again.

This may sound like an outlandish scenario, but it’s not. Researchers across Europe are making rapid progress towards developing elastic membrane patches that mimic the human skin either in looks, functionality, or both.

Electronic skin (e-skin) is categorised as an ‘electronic wearable’ – that is, a smart device worn on, or near, the surface of the skin to extract and analyse information relating to the wearer. A better-known electronic wearable is an activity tracker, which typically senses movement or vibrations to give feedback on a user’s performance. More advanced wearables collect data on a person’s heart rate and blood pressure.

Developers of e-skins, however, are setting their sights higher. Their aim is to produce stretchy, robust, flexible membranes that incorporate advanced sensors and have the ability to self-heal. The potential implications for medicine and robotics are immense.

Central nervous system

Already in circulation are skin-like membranes that adhere to the surface of the body and detect pressure, strain, slip, force and temperature. Others are being created to recognise biochemical changes that signal disease. A number of projects are working on skins that will envelop robots or human prosthetics, giving these machines and instruments the ability to manipulate objects and perceive their environments with a high degree of tactile sensitivity. And the dream, of course, is to develop an e-skin that can connect with the central nervous system of the wearer (someone who is paralysed, for instance), thereby restoring sensation that has been lost through disease or trauma.

With their project called PepZoSkin, researchers at Tel Aviv University in Israel are on a journey they believe will eventually turn this dream into a reality. Within a decade, they believe artificial skin patches will be sufficiently advanced to alert wearers to dangers they are not able to perceive naturally.

‘I have a friend in a wheelchair who has no sensation in his legs – he has no idea if hot coffee has spilled on his legs,’ said research associate Dr Sharon Gilead. ‘The idea is that a skin patch on his leg will give a signal – maybe a red light – that will tell him when something is wrong, saving him from a severe burn.

‘This will be a first step. And as we progress on this mission, we will get the thin layer (e-skin) to talk to the nervous system, replacing the sense of feeling that’s missing. Though this is still a little distant, it’s definitely the direction (we’re moving in).’

The Tel Aviv team is developing a skin that will extract and analyse health information without requiring an external source of power. The membrane will be self-powered thanks to a phenomenon known as piezoelectricity. This refers to an electric charge that accumulates in certain materials (including bone, DNA and certain proteins) in response to applied mechanical stress. In short, when you press on an e-skin made from piezoelectric material, even very gently, it will generate an electric charge. Add a circuit, and this electricity can be put to use – it could power a pacemaker, for instance.

For a person with paralysis, the hot spilled drink would create a deformation of the e-skin that would be read by the skin as a mechanical pressure, and this in turn would be translated into an electrical signal. This signal might then trigger that warning light or a sound.
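The sensing chain described above — pressure, charge, voltage, alert — can be sketched numerically. For a piezoelectric material, the charge generated by a force F along the poling axis is approximately q = d33 · F, where d33 is the material's piezoelectric coefficient. The coefficient, capacitance and threshold below are illustrative values, not figures from the PepZoSkin project:

```python
def piezo_alert(force_newtons, d33=20e-12, capacitance=1e-9, threshold_volts=0.05):
    """Convert a mechanical force on a piezoelectric patch into a voltage
    and flag it when it crosses a warning threshold.
    q = d33 * F (charge in coulombs), V = q / C (volts across the sensing
    capacitance). All parameter values here are illustrative."""
    charge = d33 * force_newtons
    voltage = charge / capacitance
    return voltage, voltage > threshold_volts

# A hot cup pressing on the leg with roughly 5 N of force:
v, warn = piezo_alert(5.0)
```

With these assumed values the 5 N press produces about 0.1 V, which exceeds the threshold and would light the warning indicator.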

"As we progress on this mission, we will get the thin layer (e-skin) to talk to the nervous system, replacing the sense of feeling that’s missing."  
Dr Sharon Gilead, Tel Aviv University, Israel 


The challenge right now is to find piezoelectric materials that are non-toxic to the body. Gal Fink, PhD student and PepZoSkin researcher, said: ‘The piezoelectric materials in use today contain lead, making them damaging to the body. We focus on biological molecules and bio-inspired molecules (that is, lab-made molecules that imitate those found in the body).’

Professor Ehud Gazit, who leads the project, explained the significance of finding piezoelectric materials that can be developed into safe products. ‘Our current work on the piezoelectric peptide materials will result, very soon, in lead-free products that work as well as the toxic, lead-filled products that are currently available, except of course our new materials will be far better because they will be safe to use in contact with the human body, and even as implants.’

Prof. Gazit’s team expects to take their work to the next level early next year. By then, they hope to have chosen their organic molecule and optimised it for piezoelectric activity. Next, they plan to develop it into functional nanodevices. They believe that in time, these will be used extensively in biological and medical applications, serving as energy harvesters and biosensors, transmitting vital information directly from human tissue and back to the user or a third party.


Biosensing is also at the core of A-Patch, another e-skin project. With her team at the Israel Institute of Technology (Technion) in Haifa, scientific project manager Dr Rotem Vishinkin has developed a patch based on a ‘crazy idea’ that project coordinator Professor Hossam Haick had almost a decade ago: that infectious diseases could be detected quickly and dependably through the skin.

‘We’d already found a way to use breath analysis to discriminate between diseases, so we thought it might be feasible to use a patch on the skin to “smell” the body for conditions,’ she explained.

She was particularly keen to find a quick, non-invasive way to test for tuberculosis (TB) – a highly contagious disease that is particularly prevalent in countries of the developing world. TB affects 10 million people annually and kills 1.4 million. Early detection is important, as transmission can be contained once a diagnosis has been made, and antibiotics are most effective when the infection is new.

Typically, TB is diagnosed from the sputum a patient coughs up, but research suggests that many people are unable to produce a sample of the quality needed to yield an accurate result. What’s more, it can take up to two weeks for a test result to be delivered, especially in remote communities where samples travel long distances to reach a lab, giving the disease extra days or even weeks to run amok.

The aim of A-Patch is to develop a cheap, efficient alternative to the sputum test. The ultra-thin, flexible patch uses chemical sensors to detect changes in the body’s organic compounds that are triggered when the TB bacterium takes hold. Dr Vishinkin says that soon-to-be-published research funded by the Bill and Melinda Gates Foundation shows the A-Patch, when worn for an hour, delivers a TB diagnosis with a 90% accuracy rate. The team hopes to reduce the wearing time to five minutes, with the patch being applied to the arm.

A disposable A-Patch will cost between one and two US dollars and require no lab equipment other than an electronic reader that a doctor can use to activate a patch and interpret the results. The Technion team has already lined up an industrial partner specialising in diagnostic kits to help bring the product to market. Dr Vishinkin is hopeful that a viable test will be rolled out within the next few years.

‘We estimate the available market for these kits is 71 million tests per year,’ she said. ‘And because a patch can be used at home, you don’t need to worry about the stigma of going to a TB clinic to be tested. It means people will be more willing to step forward.’

The precise mechanisms for transmitting results from the patch to the reader are still being worked out. ‘We have partners in the fields of electronic circuits, sensors, data analysis to help us with aspects of the project,’ said Dr Vishinkin.

In time, Dr Vishinkin anticipates developing a patch intended for longer-term use – to monitor the effectiveness of a TB treatment regime over a number of weeks, for instance. However, there’s a high chance a patch will become ripped or damaged with extended use, rendering it ineffective. To mitigate this risk, the project’s scientists have developed a mechanism of patch self-repair, which enables the matrix of peptide bonds in a membrane to make fresh networks once damage has been detected, restoring the integrity of the e-skin.

‘Every day brings us closer to our target of creating a fast, reliable, simple diagnostic tool for TB,’ said Dr Vishinkin. ‘And we won’t stop here. What we’re creating is a platform for detecting diseases, not just a kit for a specific disease. We could easily switch to Covid-19 next.’

The research in this article was funded by the EU.

Originally published by
Vittoria D'Alessio | October 28, 2020


Gold Level Contributor

Getty Images

The method could lead to better therapeutics for treating glioblastoma or help better identify those with aggressive forms of the disease.

A laboratory test developed by a research team led by Johns Hopkins University bioengineers can accurately pinpoint, capture, and analyze the deadliest cells in glioblastoma, the most common and aggressive brain cancer in adults.

The method's ability to capture the invasive proliferating cells in the fatal condition could lead to the discovery of new drugs to prevent or slow the cancer's spread. The test can also accurately predict which patients have the least or most aggressive form of glioblastoma.

The findings are described in a paper published in Nature Biomedical Engineering.

"Because we have the unique ability to identify those deadly cells, we envision utilizing this platform to screen potential therapeutics in order to effectively block the invasion and/or proliferation of these cells and ultimately prolong the survival of patients by putting precision medicine in practice," said Konstantinos Konstantopoulos, a professor of chemical and biomolecular engineering, biomedical engineering, and oncology at Johns Hopkins and senior author on the paper. "By subjecting these deadly cells to proteogenomic analysis, we will identify and characterize novel targets to stop these highly invasive and proliferative cells."

Current testing technologies have not been able to effectively predict glioblastoma outcomes that are specific to each patient, according to the paper. And the methods that do exist for single-cell analysis are too time-consuming, expensive and "impractical for informing patient care given the short survival span of patients with glioblastoma," the paper states. Median survival times for the cancer range from nearly six months for the least aggressive type to about 29 months for the most aggressive.

The Johns Hopkins team, which collaborated with researchers from the Mayo Clinic and Stanford University, demonstrated last year the ability of their test to distinguish metastatic from non-metastatic breast cancer cells, suggesting that it may be applicable to other solid cancers. The team received a U.S. patent for the test, called Microfluidic Assay for quantification of Cell Invasion (MAqCI), which requires just a small number of cells to be placed into an apparatus that resembles a fluid-filled ant farm, with Y-shaped tunnels that mimic vascular conduits in the brain. Cells are scored as the most lethal based on three key elements of metastasis: the ability to move, to compress and squeeze through tight branch channels, and to reproduce.
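The three-criterion scoring just described can be pictured as a simple predicate over measured cell behaviors. The field names and threshold values below are invented for illustration; the actual MAqCI readout is a quantitative assay, not this boolean sketch:

```python
def is_highly_lethal(cell):
    """Flag a cell as invasive-proliferative if it meets all three
    MAqCI-style criteria described above: it moves, it squeezes through
    narrow branch channels, and it divides. Thresholds are illustrative."""
    motile = cell["migration_speed_um_per_hr"] > 10.0
    squeezes = cell["entered_narrow_channel"]
    proliferates = cell["divisions_in_24h"] >= 1
    return motile and squeezes and proliferates

# Two hypothetical cells: one aggressive, one indolent.
cells = [
    {"migration_speed_um_per_hr": 25.0, "entered_narrow_channel": True,  "divisions_in_24h": 2},
    {"migration_speed_um_per_hr": 4.0,  "entered_narrow_channel": False, "divisions_in_24h": 0},
]
lethal_fraction = sum(is_highly_lethal(c) for c in cells) / len(cells)
```

A patient-level readout like the paper's could then be built from the fraction of sampled cells flagged this way.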

The method requires more testing with a larger sample of patients, but its accuracy in predicting length of survival varied from 86% in a blind retrospective study involving 28 patients to 100% in a blind prospective study with 5 patients.

Originally published by
Doug Donovan | October 26, 2020
HUB - Johns Hopkins University


Silver Level Contributor

In the future, this model could be applied to a consented phone conversation, or through an app on a mobile phone, Ajay Royyuru said. It could become a test that people take routinely, to get snapshots of their cognitive state throughout their life. (IBM Research)

Doctors use brain scans and spinal taps to diagnose Alzheimer’s disease, but these methods can be expensive, invasive and usually aren’t done until after a person has shown signs of cognitive decline—at which point it can be difficult to head off the progression of the disease.

IBM thinks it has the beginnings of a solution: an artificial intelligence model, developed with Pfizer, that could eventually predict whether a person will develop Alzheimer’s using a simple language test. This model, trained and tested on data from a decades-long health study, correctly predicted whether healthy people would eventually develop Alzheimer’s 74% of the time. The findings appear in the Lancet journal EClinicalMedicine.

The researchers trained AI algorithms on hundreds of short, noninvasive, standardized speech samples from the Framingham Heart Study, a well-known study tracking various health measures in more than 5,000 people and their families since the 1940s. The samples are from a test called the Cookie Theft Task, where people are asked to describe a drawing in their own words. 
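As a rough illustration of how spoken descriptions can become a screening signal, the sketch below extracts two simple linguistic features from a Cookie Theft-style transcript. This is not IBM's actual model: the features, cutoffs and transcripts are assumptions made for illustration.

```python
# Illustrative sketch (not IBM's pipeline): compute simple linguistic
# features of the kind a speech-based screening model might learn from.
# Feature names, cutoffs and transcripts are invented.

def speech_features(transcript):
    words = transcript.lower().split()
    fillers = sum(w in {"um", "uh", "er"} for w in words)
    return {
        "type_token_ratio": len(set(words)) / len(words),  # vocabulary richness
        "filler_rate": fillers / len(words),               # hesitation markers
    }

def flag_for_followup(transcript, ttr_cutoff=0.6, filler_cutoff=0.1):
    """Crude screen: low vocabulary richness or many fillers -> refer on."""
    f = speech_features(transcript)
    return f["type_token_ratio"] < ttr_cutoff or f["filler_rate"] > filler_cutoff

fluent = "the boy is on a stool reaching for the cookie jar while the sink overflows"
halting = "um the jar the the boy um water the water um"
print(flag_for_followup(fluent), flag_for_followup(halting))  # False True
```

In the study, many such signals were fed to trained algorithms rather than fixed cutoffs, and the output was a risk prediction, not a diagnosis.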

“Through our work in other disease conditions, we know this is an extremely descriptive cognitive assessment. Each person uses cognitive processes to visually receive input, think about it and turn that into a bunch of words,” said Ajay Royyuru, IBM Fellow and vice president of healthcare and life sciences research at IBM. “It’s a round-trip of cognitive processes occurring in the head.”

Participants in the Framingham Heart Study took the test over and over again, providing their own baseline data and allowing the researchers to look into how early the tests could signal cognitive decline, Royyuru added.

The researchers tested the model on speech samples taken from 80 people in the study before they showed signs of cognitive decline. The model predicted the onset of Alzheimer’s disease an average of seven and a half years before patients were officially diagnosed, the study found. But the test won’t necessarily replace today’s clinical standard; it could be used, rather, as an early step to recommend a brain scan.

“It doesn’t mean diagnosing Alzheimer’s at that point, but simply being able to see the abnormality here. It’s enough to consult a neurologist to get to an assessment that is more thorough,” Royyuru said. But further on, IBM’s work could lead to the development of new digital biomarkers and noninvasive tests that may move the timeline of Alzheimer’s diagnosis earlier, he said.

In the future, this model could be applied to a consented phone conversation, or through an app on a mobile phone, Royyuru said. It could become a test that people take routinely, to get snapshots of their cognitive state throughout their life.

The ability to catch Alzheimer’s early could play a key role in developing new treatments for the disease, an area that has been rife with failure. Digital biomarkers that can quantitatively gauge the progression of a person’s disease could help drug developers design better Alzheimer’s trials.

“If we look back to some failed Alzheimer’s clinical trials, they’ve not had success because the majority of the time, they recruited patients who were all over the place in progression from a disease biology viewpoint,” Royyuru said. In other words, it’s no surprise that a drug that should work at a certain stage of the disease would fail in a heterogeneous population of patients at different stages.

“There is a need to characterize the state of an individual through biomarkers, particularly through digital biomarkers that allow for targeted staging and recruitment into appropriate interventional trials,” he added.

Speech and language are just one aspect of cognitive function that IBM is looking at. In a study published early last year, IBM researchers described a model that assesses proteins in the blood to predict the buildup of amyloid-beta in people’s spinal fluid, a key marker of Alzheimer’s disease. The test was accurate up to 77% of the time.

Originally published by
by Amirah Al Idrus | October 22, 2020
Fierce Biotech

Gold Level Contributor
All of the components in the sensor device are easy to mass-produce, so the researchers estimate that each device would cost around $10. Image: David Sadat
A wearable sensor to help ALS patients communicate
Researchers have designed a skin-like device that can measure small facial movements in patients who have lost the ability to speak.

People with amyotrophic lateral sclerosis (ALS) suffer from a gradual decline in their ability to control their muscles. As a result, they often lose the ability to speak, making it difficult to communicate with others.

A team of MIT researchers has now designed a stretchable, skin-like device that can be attached to a patient’s face and can measure small movements such as a twitch or a smile. Using this approach, patients could communicate a variety of sentiments, such as “I love you” or “I’m hungry,” with small movements that are measured and interpreted by the device.

The researchers hope that their new device would allow patients to communicate in a more natural way, without having to deal with bulky equipment. The wearable sensor is thin and can be camouflaged with makeup to match any skin tone, making it unobtrusive.

“Not only are our devices malleable, soft, disposable, and light, they’re also visually invisible,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences at MIT and the leader of the research team. “You can camouflage it and nobody would think that you have something on your skin.”

The researchers tested the initial version of their device in two ALS patients (one female and one male, for gender balance) and showed that it could accurately distinguish three different facial expressions — smile, open mouth, and pursed lips.


MIT graduate student Farita Tasnim and former research scientist Tao Sun are the lead authors of the study, which appears today in Nature Biomedical Engineering. Other MIT authors are undergraduate Rachel McIntosh, postdoc Dana Solav, research scientist Lin Zhang, and senior lab manager David Sadat. Yuandong Gu of the A*STAR Institute of Microelectronics in Singapore and Nikta Amiri, Mostafa Tavakkoli Anbarani, and M. Amin Karami of the University at Buffalo are also authors.

A skin-like sensor

Dagdeviren’s lab, the Conformable Decoders group, specializes in developing conformable (flexible and stretchable) electronic devices that can adhere to the body for a variety of medical applications. She became interested in working on ways to help patients with neuromuscular disorders communicate after meeting Stephen Hawking in 2016, when the world-renowned physicist visited Harvard University and Dagdeviren was a junior fellow in Harvard’s Society of Fellows.

Hawking, who passed away in 2018, suffered from a slow-progressing form of ALS. He was able to communicate using an infrared sensor that could detect twitches of his cheek, which moved a cursor across rows and columns of letters. While effective, this process could be time-consuming and required bulky equipment.

Other ALS patients use similar devices that measure the electrical activity of the nerves that control the facial muscles. However, this approach also requires cumbersome equipment, and it is not always accurate.

“These devices are very hard, planar, and boxy, and reliability is a big issue. You may not get consistent results, even from the same patients within the same day,” Dagdeviren says.

Most ALS patients also eventually lose the ability to control their limbs, so typing is not a viable strategy to help them communicate. The MIT team set out to design a wearable interface that patients could use to communicate in a more natural way, without the bulky equipment required by current technologies.

The device they created consists of four piezoelectric sensors embedded in a thin silicone film. The sensors, which are made of aluminum nitride, can detect mechanical deformation of the skin and convert it into an electric voltage that can be easily measured. All of these components are easy to mass-produce, so the researchers estimate that each device would cost around $10.

The researchers used a process called digital image correlation on healthy volunteers to help them select the most useful locations to place the sensor. They painted a random black-and-white speckle pattern on the face and then took many images of the area with multiple cameras as the subjects performed facial motions such as smiling, twitching the cheek, or mouthing the shape of certain letters. The images were processed by software that analyzes how the small dots move in relation to each other, to determine the amount of strain experienced in a single area.

“We had subjects doing different motions, and we created strain maps of each part of the face,” McIntosh says. “Then we looked at our strain maps and determined where on the face we were seeing a correct strain level for our device, and determined that that was an appropriate place to put the device for our trials.”
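The core of the speckle-tracking step can be sketched in a few lines: compare the spacing of tracked dots before and after a facial motion, and report the relative stretch. The coordinates below are made-up pixel positions, not data from the study.

```python
# Minimal sketch of the digital image correlation idea: track painted
# speckle dots between a "rest" frame and a "smile" frame, and estimate
# strain as the relative stretch between neighboring dots.
from math import dist

rest  = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]   # dot centers at rest (px)
smile = [(0.0, 0.0), (11.0, 0.0), (22.5, 0.0)]   # same dots while smiling

def segment_strains(before, after):
    """Engineering strain (L - L0) / L0 for each neighboring dot pair."""
    strains = []
    for i in range(len(before) - 1):
        l0 = dist(before[i], before[i + 1])
        l1 = dist(after[i], after[i + 1])
        strains.append((l1 - l0) / l0)
    return strains

print(segment_strains(rest, smile))  # [0.1, 0.15]
```

Real DIC software correlates dense 2D speckle patterns across multiple cameras; the one-dimensional pair-spacing version here only conveys the principle.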

The researchers also used the measurements of skin deformations to train a machine-learning algorithm to distinguish between a smile, open mouth, and pursed lips. Using this algorithm, they tested the devices with two ALS patients, and were able to achieve about 75 percent accuracy in distinguishing between these different movements. The accuracy rate in healthy subjects was 87 percent.
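The classification step above (deformation signals in, one of three movements out) can be illustrated with a toy nearest-centroid classifier over the four sensor channels. The study used a trained machine-learning model; the stand-in below, including every voltage value, is invented.

```python
# Hedged sketch of classifying facial movements from 4-channel
# piezoelectric readings. A toy nearest-centroid rule stands in for the
# study's trained model; all values are made up.
from math import dist

training = {  # label -> example 4-channel readings (illustrative volts)
    "smile":       [(0.9, 0.2, 0.1, 0.8), (0.8, 0.3, 0.2, 0.9)],
    "open mouth":  [(0.1, 0.9, 0.8, 0.2), (0.2, 0.8, 0.9, 0.1)],
    "pursed lips": [(0.5, 0.5, 0.1, 0.1), (0.4, 0.6, 0.2, 0.2)],
}

def centroid(samples):
    return tuple(sum(channel) / len(samples) for channel in zip(*samples))

centroids = {label: centroid(s) for label, s in training.items()}

def classify(sample):
    """Assign the movement whose channel centroid is closest to the sample."""
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

print(classify((0.85, 0.25, 0.15, 0.85)))  # smile
```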

“The continuous monitoring of facial motions plays a key role in nonverbal communications for patients with neuromuscular disorders. Currently, the mainstream approach is camera tracking, which presents a challenge for continuous, portable usage,” says Takao Someya, a professor of electrical engineering and information systems and dean of the School of Engineering at the University of Tokyo, who was not involved in the study. “The authors have successfully developed thin, wearable, piezoelectric sensors that can reliably decode facial strains and predict facial kinematics.”

Enhanced communication

Based on these detectable facial movements, a library of phrases or words could be created to correspond to different combinations of movements, the researchers say.

“We can create customizable messages based on the movements that you can do,” Dagdeviren says. “You can technically create thousands of messages that right now no other technology is available to do. It all depends on your library configuration, which can be designed for a particular patient or group of patients.”
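The customizable library Dagdeviren describes amounts to a mapping from detected movement combinations to messages. The sketch below is a hypothetical configuration; the phrases and movement names are taken from the examples in this article, and the pairings would be tailored per patient.

```python
# Hypothetical phrase library: combinations of detected facial movements
# map to messages. The pairings are illustrative, not from the study.
phrase_library = {
    ("smile",): "I love you",
    ("open mouth",): "I'm hungry",
    ("pursed lips",): "Yes",
    ("smile", "pursed lips"): "Thank you",
}

def decode(movements):
    """Translate a detected movement sequence into a message, if known."""
    return phrase_library.get(tuple(movements), "<unrecognized sequence>")

print(decode(["smile"]))                 # I love you
print(decode(["smile", "pursed lips"]))  # Thank you
```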

The information from the sensor is sent to a handheld processing unit, which analyzes it using the algorithm that the researchers trained to distinguish between facial movements. In the current prototype, this unit is wired to the sensor, but the connection could also be made wireless for easier use, the researchers say.


The researchers have filed for a patent on this technology and they now plan to test it with additional patients. In addition to helping patients communicate, the device could also be used to track the progression of a patient’s disease, or to measure whether treatments they are receiving are having any effect, the researchers say.

“There are a lot of clinical trials that are testing whether or not a particular treatment is effective for reversing ALS,” Tasnim says. “Instead of just relying on the patients to report that they feel better or they feel stronger, this device could give a quantitative measure to track the effectiveness.”

The research was funded by the MIT Media Lab Consortium, the National Science Foundation, and the National Institute of Biomedical Imaging and Bioengineering.

Originally published by
Ann Trafton | MIT News Office | October 22, 2020

Silver Level Contributor

Eye surgeons currently do not have a reliable method to perform a quantitative measurement of corneal elasticity in patients before a procedure. Photo courtesy: GettyImages

LASIK eye surgery – a laser reshaping of the cornea to improve vision – is one of the most popular elective surgeries in the United States, and a University of Houston professor of biomedical engineering intends to improve upon it by giving surgeons more information about the cornea before they begin.

Specifically, Kirill Larin wants to provide measurement of corneal elasticity, a key component of visual acuity. Eye surgeons currently do not have a reliable method to perform a quantitative measurement of corneal elasticity in patients before the procedure.   

“We will develop a novel method for the imaging and assessment of corneal elastic properties that could potentially be used for routine clinical diagnostics of different corneal diseases and treatment,” said Larin, who is using a $1.6 million continuation grant from the National Eye Institute to improve current Optical Coherence Tomography (OCT) to provide ultrafast 3D clinical imaging. The technology will combine Brillouin microscopy with OCT and Optical Coherence Elastography (OCE), creating the new BOE.

The new BOE technology uses highly localized air pressure stimulation.  

“We’re going to use an air puff that will produce very small waves on the surface of the eye. The patient will not feel them, but we will be able to detect them. The speed of the waves will tell us about the elasticity of the cornea,” said Larin.  Using OCT, he will reconstruct volumetric biomechanical properties of the cornea.  
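The underlying physics Larin describes (stiffer tissue carries waves faster) can be sketched with textbook elastic relations. Note the heavy caveats: the density, Poisson's ratio and Rayleigh-wave approximation below are generic textbook assumptions, not values or formulas from the study, and real corneal elastography uses far more sophisticated models.

```python
# Back-of-the-envelope sketch: relate a measured surface-wave speed to
# tissue stiffness. For an ideal elastic solid, G = rho * c_s**2, and a
# Rayleigh-type surface wave travels slightly slower than the bulk shear
# wave. All numbers are illustrative, not corneal data from the study.

RHO = 1000.0   # tissue density, kg/m^3 (roughly water; assumption)
NU = 0.49      # Poisson's ratio for nearly incompressible tissue (assumption)

def shear_modulus_from_surface_wave(c_surface):
    """Estimate shear modulus G (Pa) from surface-wave speed (m/s)."""
    rayleigh_ratio = (0.87 + 1.12 * NU) / (1 + NU)  # classical c_R/c_s approx.
    c_shear = c_surface / rayleigh_ratio
    return RHO * c_shear**2

def youngs_modulus(G):
    return 2 * G * (1 + NU)  # E = 2G(1 + nu) for an isotropic solid

G = shear_modulus_from_surface_wave(2.0)  # a hypothetical 2 m/s wave
print(f"G ~ {G:.0f} Pa, E ~ {youngs_modulus(G):.0f} Pa")
```

The point of the sketch is only the scaling: because stiffness goes with the square of wave speed, small changes in the measured wave speed translate into large, detectable changes in estimated elasticity.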

Larin already developed a first prototype of the combined instrument, demonstrated its capability to measure biomechanical properties of the cornea in vitro and in vivo, and has developed analytical models to extract biomechanical properties. The new grant, he said, will accelerate transition of this technology into clinics, influence the selection and application of corneal surgical treatments and will help understand the structural consequences of corneal disease and wound healing. 

Larin’s previous work made fundamental advances in the understanding of corneal biomechanics, which influence clinical interpretation of diagnostic tests, e.g. measurement of intraocular pressure, and have been implicated as important factors in the development of glaucoma.  

“Our technology will optimize the delivery of health care to the eye and deliver an early diagnosis for many eye conditions.” 

Collaborating on the project with Larin are Michael Twa, dean of the UH College of Optometry, and Salavat Aglyamov, research assistant professor of mechanical engineering.

Originally published by
Laurie Fickman | October 20, 2020
University of Houston

Silver Level Contributor

Image: Fotolia

Dive Brief:

Medical device funding hit a new high in the third quarter, growing 63% year on year to top $5 billion for the first time in CB Insights’ dataset.

Investments in robotic surgery startups were a major driver of the increase. The analysts listed the progress of neuromodulation devices and Medtronic’s deals in diabetes and neurosurgery as other medical device highlights of the quarter.

The big quarter for device investment was part of a broader uptick in healthcare activity. CB Insights also tracked funding records in digital health and telehealth, largely due to a jump in the number of companies raising mega-rounds worth upward of $100 million.

Dive Insight:

There were signs early in the pandemic that the crisis could constrain access to capital. Consultancy EY tracked a 22% drop in medtech venture capital funding over a 12-month period, in part due to a steep decline as the coronavirus spread in the second quarter. However, CB Insights’ third quarter report is considerably more upbeat.

The value and volume of medical device deals grew sequentially and year on year in the third quarter as companies entered into 478 agreements worth $5.1 billion. The median quarterly deal total over the previous 11 quarters was $3.4 billion. The sheer number of deals was a new record, too.

CB Insights identified three trends in its analysis of medical device dealmaking activity in the third quarter. The analysts highlighted a clutch of robotic surgery deals led by the $77 million raised by PROCEPT BioRobotics, the manufacturer of a robotic system for delivering aquablation therapy. The market intelligence firm also highlighted earlier-stage investments in Monteris Medical, Vicarious Surgical and NDR Medical Technologies.

The analysts selected novel neuromodulation devices as their second highlight. Neuromodulation leaders blamed a lack of innovation for the slowdown of the sector last year. CB Insights sees signs innovation is picking up again, pointing to a financing at Neurovalens and updates from SetPoint Medical, Spark Biomedical and Synchron to make its case.

Medtronic, one of the neuromodulation leaders that suffered a slowdown last year, is the focus of CB Insights’ other quarterly highlight. The analysts identified Medtronic as an active investor and buyer in the third quarter, when it struck deals to acquire Medicrea and Companion Medical and invest in Sinovation and Triple Jump.

Other sections of the report detail new fundraising highs for other parts of the broader medtech industry. The digital health industry had a bumper quarter, pulling in $8.4 billion across 502 deals. CB Insights had never previously reported a quarter worth $6 billion or more and last tracked a $5 billion quarter in the first half of last year.

The surge in total deal value happened despite a modest increase in volumes, reflecting the rise of large, late-stage agreements. CB Insights tracked 23 digital health mega-rounds in the third quarter. There were only 34 mega-rounds across the previous year. CB Insights’ definition of digital health was broad enough to cover investments in liquid biopsy startups Freenome and Thrive Earlier Detection.

CB Insights included telehealth deals in its digital health figures and broke them out for a separate analysis. The analysts tracked $2.8 billion in telehealth deals, up 72% compared to the second quarter. The jump was driven by five large deals, led by the $275 million investment in VillageMD, which together accounted for more than 30% of all telehealth funding in the third quarter.

Originally published by
Nick Paul Taylor | October 20, 2020

Silver Level Contributor

The system's AI helps automate measuring the heart muscle and gauging rates of blood flow, to help speed up and standardize echocardiogram exams. (Image: GE Healthcare)

GE Healthcare nabbed FDA clearance for its artificial intelligence-powered cardiovascular ultrasound system, designed to help automate and standardize echocardiogram exams.

The Ultra Edition package for the company’s line of Vivid ultrasounds incorporates learning algorithms that can automatically detect the points in a 2D image that are used to measure the size of the left ventricle, an important metric in diagnosing and treating heart failure and disease.

The system also semi-automatically detects measurements of blood flow and velocity within the body. And at the same time, it identifies the transducer angle used by the technician and labels each image accordingly, which simplifies the technician's work and makes later review of the images easier. Combined, the features aim to save about 7 to 10 minutes per scan, according to the company.

“With the Vivid Ultra Edition, we offer AI capabilities that help address healthcare providers’ two key challenges in echo exams—how time-consuming the exam is and the degree of variability that exists in the quantitative results,” Dagfinn Saetre, GE Healthcare’s general manager of cardiovascular ultrasound, said in a statement.

In addition, the AI can help reproduce personalized exams, as patients come in for subsequent checkups to monitor their disease progression, the company said.

Beyond heart disease itself, echocardiograms have been used to screen incoming patients with COVID-19 for potentially serious heart-related complications. 

Earlier this year the FDA cleared several AI and ultrasound technologies for use against the coronavirus, including Caption Health’s system for guiding technicians through the procedure and obtaining a clearer image. The agency also granted an emergency authorization to Eko, for its algorithms that use the heart’s electrical signals to measure ejection fraction. 

Separately, GE Healthcare also launched Edison HealthLink, designed to help clinicians collate health data from various sources on-site, for fast responses during critical, time-sensitive situations such as treating strokes.

The on-site computing technology can evaluate brain scans without uploading the data to the cloud and potentially waiting for a response, the company said.

“As more care delivery becomes virtual and as more healthcare data moves to the cloud, technologies like Edison HealthLink provide a bridge, allowing devices to operate on premise, at the edge and in the cloud,” said GE Healthcare’s chief digital officer, Amit Phadnis.

Originally published by
Conor Hale | October 14, 2020
Fierce Biotech

Bronze Level Contributor

Olympus plans to start with ENDO-AID’s commercial launch in Europe this November, followed by expansions into certain Middle Eastern, African and Asia-Pacific countries. (Olympus)

Olympus has begun rolling out an artificial-intelligence-powered platform for its new endoscope, designed to automatically spot suspicious lesions and polyps during a colonoscopy in real time.

The ENDO-AID program works in combination with the company’s Evis X1, which it launched in April. The AI’s machine-learning algorithm highlights areas of interest whenever a lesion appears on the video screen, including potential cancers and benign tumors.

Olympus also described plans to expand the technology’s use to other therapy and screening areas in the future, including for inspections of the esophagus, stomach and other gastrointestinal organs.

“Especially in AI, we recognize the power of elevating endoscopic imaging to uncharted levels,” said Frank Drewalowski, head of Olympus’ endoscopic solutions division. “Considering ENDO-AID as a first step, we are planning additional AI-powered applications for image detection and characterization—not only for colonoscopy.”

The company plans to start with ENDO-AID’s commercial launch in Europe this November, followed by expansions into certain Middle Eastern, African and Asia-Pacific countries. The system’s use in Japan, the Americas and China are slated for a later date after securing regulatory clearances. 

Earlier this month, Olympus received 510(k) clearances from the FDA for two colonoscopes, the PCF-H190T and the PCF-HQ190.

The former is a slimmer, short-bending, high-definition scope made for turning tighter corners and is used to help classify colorectal polyps. The latter, meanwhile, is equipped with dual-focus hardware for visualizing the intestine’s mucus membranes, with a 170-degree field-of-view. Both are used with the Evis Exera III imaging platform.

In addition, the company announced a new automated device for cleaning and processing endoscopes after use. The OER-Elite is designed to disinfect up to two endoscopes at once in a half-hour using a combination of ultrasonic waves and detergents.

Originally published by
Conor Hale | October 12, 2020
Fierce Biotech

Gold Level Contributor

The first quarter of 2020 was the slowest period for medtech IPOs in years, with just one company listing to raise $16 million. Activity picked up over the summer, when as many companies went public in one week as listed over the first five months of the year. The resurgence of activity was driven by one sector: Cancer testing. 

AnPac Bio-Medical Science, a Sino-American cancer screening company, became the first medtech to list on Nasdaq this year in January, but only after downsizing its offering and pricing at the bottom of its target range. The stock has fallen more than 60% since the IPO.

That marked the highpoint of the first quarter. With COVID-19 disrupting operations and roiling financial markets, no other medtech companies completed Nasdaq IPOs in the first quarter.

The tide turned in June when a flurry of cancer diagnostic companies went public. Buoyed by strong support from existing investors, Burning Rock, a Chinese cancer diagnostics company, priced an IPO above the target range. The next week, Genetron, another Chinese cancer testing business, smashed its target range and upsized its offering. Progenity, a U.S. provider of prenatal and cancer risk tests, priced its IPO at the midpoint of the range on the same day as Genetron listed.

Amid that activity, ArcherDX filed to raise $100 million to bring its tumor profiling test to market. The IPO was aborted when ArcherDX agreed to accept a cash-and-stock buyout bid from Invitae.

The cancer diagnostics sector was deprived of a showpiece 2020 IPO when Illumina made an offer to buy Grail weeks after the screening business filed to go public. The takeover is yet to close but as it stands Grail is set to rejoin its parent company rather than list on Nasdaq.

The cancer testing companies that did go public have had mixed fortunes. As it stands, Burning Rock’s stock has fared best, typically trading around $10 above its $16.50 offering price. The stock closed last week at $26.50. Genetron, in contrast, has rarely traded close to its $16 offering price and closed last week at $11.51. Progenity has had a similar experience, pricing at $15 but spending most of its time on public markets below $10.

Companies remain interested in going public, though. Earlier this month, Biodesix filed to raise $75 million to support commercialization of a roster of blood-based tests designed to assess the risk of lung cancer, measure tumor mutations and assess the patient’s immune system after a diagnosis. Biodesix also provides COVID-19 molecular and serology tests.

Non-cancer medtech IPOs accelerate

The rhythm of IPOs in the broader medtech industry in 2020 has followed a similar pattern to that of the cancer testing niche. Lyra Therapeutics became the first medtech company to list on Nasdaq in 2020 at the start of May, when it raised $56 million to develop products based on its bioresorbable polymeric matrix technology. 

Venous disease specialist Inari Medical pulled off an upsized IPO later in May and was joined on Nasdaq by kidney disease diagnostic player RenalytixAI in July. Acutus Medical priced its IPO in August, securing $159 million to support work on the diagnosis and treatment of cardiac arrhythmias.

The trickle of IPO activity seen over the summer accelerated significantly as the fall approached. Portable dialysis startup Outset Medical raised $242 million in September, after which Pulmonx and Aziyo Biologics priced IPOs in quick succession. Beyond those offerings, digital health startup G Medical Innovations and spinal surgery player Spine Elements have set their target ranges.

With most of the non-cancer medtech IPOs happening in the past 10 weeks, there is a limited set of data to assess how companies are faring on public markets. As it stands, Lyra, RenalytixAI and Aziyo are trading between 12% and 26% below their IPO prices. The other four companies are well above their IPO prices.

Inari is leading the way, having seen its share price soar 281% in its short time on public markets, and Pulmonx is in second following its 122% increase. Outset Medical and Acutus are up 57% and 68%, respectively.

There is little that links the early winners of the medtech IPO class of 2020. Outset Medical sells a portable dialysis machine that is now cleared for home use, positioning the company to benefit from the anticipated shift away from in-center treatment that market leader Fresenius Medical Care talked about last week. Yet, Inari, Pulmonx and Acutus are all active in areas that have been negatively affected by the pandemic.

The presence of a pool of medtech companies that have found public investors receptive to their pitches has implications for the broader industry. Consultancy EY recently predicted a surge in M&A, in part due to the waning of IPOs as a viable way to raise money and provide an exit for private investors. If private medtech companies think they can go the IPO route, they may be less receptive to buyout bids from larger players.

Originally published by
Nick Paul Taylor | October 12, 2020
Medtech Dive
