Biomedical Physics & Engineering Express - IOPscience
Institute of Physics and Engineering in Medicine
IPEM's aim is to promote the advancement of physics and engineering applied to medicine and biology for the public benefit. Its members are professionals working in healthcare, education, industry and research.
IPEM publishes scientific journals and books and organises conferences to disseminate knowledge and support members in their development. It sets and advises on standards for the practice, education and training of scientists and engineers working in healthcare to secure an effective and appropriate workforce.
Purpose-led Publishing is a coalition of three not-for-profit publishers in the field of physical sciences: AIP Publishing, the American Physical Society and IOP Publishing.
Together, as publishers that will always put purpose above profit, we have defined a set of industry standards that underpin high-quality, ethical scholarly communications.
We are proudly declaring that science is our only shareholder.
Supports open access
A broad, inclusive, rapid review journal devoted to publishing new research in all areas of biomedical engineering, biophysics and medical physics, with a special emphasis on interdisciplinary work between these fields.
Why choose this journal?
Trustworthy science backed by rigorous peer review
Inclusive publishing practices focused on scientific validity
Median submission to first decision before peer review: 9 days
Median submission to first decision after peer review: 54 days
Impact factor: 1.6
CiteScore: 2.5
Open access
Biomaterials to biofabrication: advanced scaffold technologies for regenerative endodontics
Arun Mayya et al 2026 Biomed. Phys. Eng. Express 12 012001
Scaffold systems are fundamental to regenerative endodontics, functioning as structural frameworks and delivery vehicles for bioactive cues essential to tissue regeneration. This review comprehensively examines scaffold types, functions, and translational challenges in endodontic regeneration. Scaffolds are classified into natural, synthetic, and hybrid matrices with unique mechanical and biological profiles. Advances in nanotechnology, 3D and 4D bioprinting, and smart biomaterials have significantly improved scaffold functionality. Smart scaffolds enable the controlled release of growth factors, antimicrobial agents, and gene-functionalized molecules, facilitating angiogenesis, stem cell differentiation, and infection control. Hybrid scaffolds, such as those combining collagen and gelatin methacryloyl (GelMA), provide customized degradation, biocompatibility, and mechanical strength. Innovative systems such as magnetic nanoparticle-triggered release and responsive hydrogels address vascularization and immune modulation limitations. Clinically, platelet-rich fibrin (PRF), concentrated growth factor (CGF), and decellularized extracellular matrix (dECM) have shown success in promoting root development, pulp vitality, and periapical healing. Despite these advances, obstacles remain, including regulatory hurdles, standardization of protocols, and long-term clinical validation. Integrating AI-driven scaffold design, digital twin simulations, and organ-on-chip models holds promise for personalized therapies. Establishing scaffold-based regeneration as a standard clinical approach will require harmonized practices, scalable biomaterial production, and robust clinical outcome assessments.
Open access
TIGRE: a MATLAB-GPU toolbox for CBCT image reconstruction
Ander Biguri et al 2016 Biomed. Phys. Eng. Express 055010
In this article the Tomographic Iterative GPU-based Reconstruction (TIGRE) Toolbox, a MATLAB/CUDA toolbox for fast and accurate 3D x-ray image reconstruction, is presented. One of the key features is the implementation of a wide variety of iterative algorithms as well as FDK, including a range of algorithms in the SART family, the Krylov subspace family and a range of methods using total variation regularization. Additionally, the toolbox has GPU-accelerated projection and back projection using the latest techniques and it has a modular design that facilitates the implementation of new algorithms. We present an overview of the structure and techniques used in the creation of the toolbox, together with two usage examples. The TIGRE Toolbox is released under an open source licence, encouraging people to contribute.
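The SART-family update at the heart of such iterative reconstruction can be sketched in a few lines. This is a generic textbook formulation in NumPy for a toy system, not TIGRE's GPU implementation; the `sart` function and the toy ray geometry are illustrative only.

```python
import numpy as np

def sart(A, b, n_iter=100, relax=1.0):
    """Generic SART update: x <- x + relax * D^-1 A^T W^-1 (b - A x),
    where W and D hold the row and column sums of the system matrix A."""
    row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0
    col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sum      # W^-1 (b - A x)
        x += relax * (A.T @ residual) / col_sum
    return x

# Toy system: a 2x2 "image" probed by four rays (two rows, two columns).
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true                 # simulated projections
x_rec = sart(A, b)
print(np.round(A @ x_rec, 3))  # reprojection matches b
```

Note that when the system is underdetermined, SART converges to a solution consistent with the projection data, which is why regularized variants (e.g. with total variation) matter in practice.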
Open access
Magnetic nanoparticles for cancer theranostics
Jaiden Hart et al 2026 Biomed. Phys. Eng. Express 12 022001
Magnetic nanoparticles (MNPs) have emerged as a powerful tool in cancer theranostics due to their unique size-dependent magnetic properties, surface functionalization capabilities, and responsiveness to external magnetic fields. This review outlines different types of MNPs, including those composed of pure metals, metal oxides, and metallic alloys, and highlights their size-dependent magnetic behavior, such as superparamagnetism and dynamic magnetizations. We also explore the critical role of surface modification strategies in enhancing MNPs’ biocompatibility, colloidal stability, and functional versatility for targeted biomedical applications. The applications of MNPs in cancer therapy are discussed, with a focus on magnetic hyperthermia, drug and gene delivery, and a combination of various therapies. Additionally, we examine their cancer diagnostic roles in imaging techniques such as magnetic resonance imaging (MRI) and magnetic particle imaging (MPI), and emerging magnetic biosensing technologies such as giant magnetoresistance (GMR), magnetic tunnel junction (MTJ), magnetic particle spectroscopy (MPS), and nuclear magnetic resonance (NMR)-based platforms. These advances collectively establish MNPs as key components in the future of personalized cancer diagnosis and treatment.
Open access
Detecting early signs of patient deterioration at home using wearable sensors: a personalized anomaly detection approach
Sjoerd H Garssen et al 2026 Biomed. Phys. Eng. Express 12 025058
In the rapidly emerging field of transferring healthcare from hospitals to patients’ homes, it is essential that signs of deterioration at home are detected early so that care can be scaled up in time. Because patient deterioration is a rare event, datasets contain insufficient event data for supervised machine learning, and personalized anomaly detection (AD) may be a good alternative. This study therefore aimed to obtain insight into the use of personalized AD models for detecting early signs of patient deterioration at home from wearable sensor data. Isolation Forest and Local Outlier Factor models were applied to two datasets (one of heterogeneous patients, the other of postoperative patients) to detect signs of deterioration in the 24 hours preceding mortality or unplanned readmission. A pipeline was developed to continuously update personalized AD models over time and apply them to detection windows twice per day. Results were compared with the Remote Early Warning Score. Isolation Forest (AUROC: 0.69) and Local Outlier Factor (AUROC: 0.67) models were able to find some early signs of deterioration in heterogeneous patients (n = 113). In postoperative patients (n = 193), however, Isolation Forest (AUROC: 0.41) and Local Outlier Factor (AUROC: 0.44) performed poorly. The Remote Early Warning Score found some early signs of deterioration in both groups (AUROC: 0.63–0.76). Based on these findings, three requirements were formulated for a potentially successful application of personalized AD. First, the training set should be normal and representative of non-deteriorating patients. Second, signs of deterioration should exhibit abnormal characteristics. Third, non-deteriorating patients should not exhibit abnormal characteristics.
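The personalized AD approach described above can be sketched with scikit-learn. The synthetic "vital-sign" features and all parameter values here are illustrative stand-ins, not the authors' actual pipeline or data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Hypothetical per-window features (e.g. mean heart rate, respiratory rate).
train = rng.normal(loc=[75, 16], scale=[4, 1.5], size=(400, 2))   # "normal" windows
test = np.vstack([rng.normal([75, 16], [4, 1.5], (10, 2)),        # normal windows
                  rng.normal([110, 28], [4, 1.5], (10, 2))])      # "deteriorating"

# Fit one model per patient on that patient's own normal history.
iso = IsolationForest(n_estimators=200, random_state=0).fit(train)
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(train)

iso_flags = iso.predict(test)   # +1 = inlier, -1 = anomaly
lof_flags = lof.predict(test)
print(iso_flags, lof_flags)
```

Both models here use their built-in decision thresholds; in a real deployment these would be tuned as the personalized models are updated over time.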
Open access
Fetal monitoring technologies for the detection of intrapartum hypoxia - challenges and opportunities
Nadia Muhammad Hussain et al 2024 Biomed. Phys. Eng. Express 10 022002
Intrapartum fetal hypoxia is related to long-term morbidity and mortality of the fetus and the mother. Fetal surveillance is extremely important to minimize the adverse outcomes arising from fetal hypoxia during labour. Several methods are used in current clinical practice to monitor fetal well-being. For instance, biophysical technologies including cardiotocography, ST-analysis adjunct to cardiotocography, and Doppler ultrasound are used for intrapartum fetal monitoring. However, these technologies result in a high false-positive rate and increased obstetric interventions during labour. Alternatively, biochemical-based technologies including fetal scalp blood sampling and fetal pulse oximetry are used to identify metabolic acidosis and oxygen deprivation resulting from fetal hypoxia. These technologies neither improve clinical outcomes nor reduce unnecessary interventions during labour, and there remains a need to link the physiological changes during fetal hypoxia to fetal monitoring technologies. The objective of this article is to assess the clinical background of fetal hypoxia and to review existing technologies for its detection and monitoring. Approaches to predicting fetal hypoxia with computational and machine-learning algorithms are comprehensively reviewed, as are more specific biomarkers and new sensing technologies that may enhance the reliability of continuous fetal monitoring and enable accurate detection of intrapartum fetal hypoxia.
Open access
NEMA NU 2-2018 evaluation and image quality optimization of a new generation digital 32-cm axial field-of-view Omni Legend PET-CT using a genetic evolutionary algorithm
Rhodri Lyn Smith et al 2024 Biomed. Phys. Eng. Express 10 025032
A performance evaluation was conducted on the new General Electric (GE) digital Omni Legend PET-CT system with a 32 cm extended field of view, the first commercially available clinical digital bismuth germanate system. The system does not use time of flight (ToF). Testing was performed in accordance with the NEMA NU 2-2018 standard, and a comparison was made with two other commercial GE scanners with extended fields of view: the Discovery MI 6-ring (ToF enabled) and the Discovery IQ (non-ToF). A genetic evolutionary algorithm was developed to optimize image reconstruction parameters from image quality assessments. The Omni demonstrated an average spatial resolution of 3.9 mm FWHM at 1 cm radial offset. The total system sensitivity at the center was 44.36 cps/kBq. The peak NECR was measured as 501 kcps at 17.8 kBq ml−1, with a 35.48% scatter fraction. The maximum count-rate error below the NECR peak was 5.5%. Using standard iterative reconstructions, sphere contrast recovery coefficients ranged from 52.7 ± 3.2% (10 mm) to 92.5 ± 2.4% (37 mm). The PET-CT co-registration accuracy was 2.4 mm. In place of ToF, the Omni employs software corrections through a pre-trained neural network (PDL), trained to map non-ToF to ToF images, that takes Bayesian penalized likelihood reconstruction (Q.Clear) images as input. The optimum parameters for image reconstruction, determined using the genetic algorithm, were a Q.Clear regularization parameter of 350 and a ‘medium’ PDL setting. Using standard iterative reconstructions, the Omni initially showed increased background variability compared to the Discovery MI; with PDL reconstruction parameters optimized by the genetic algorithm, the Omni surpassed the Discovery MI on all NEMA tests. The genetic algorithm’s demonstrated ability to enhance image quality underscores the value of algorithm-driven optimization and the need to validate its use in the clinical setting.
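A minimal genetic algorithm of the kind used for such parameter searches might look as follows. The `quality` function, parameter ranges, and PDL level names are invented stand-ins for the real NEMA image-quality assessment and the scanner's reconstruction settings, chosen only to make the search loop concrete.

```python
import random
random.seed(1)

PDL_LEVELS = ["low", "medium", "high"]

def quality(beta, pdl):
    # Hypothetical image-quality score to maximise; in practice this would
    # come from an NEMA assessment of a reconstruction with these settings.
    return -((beta - 350) / 100) ** 2 - {"low": 0.4, "medium": 0.0, "high": 0.2}[pdl]

def random_individual():
    return (random.uniform(50, 700), random.choice(PDL_LEVELS))

def mutate(ind):
    beta, pdl = ind
    beta = min(700, max(50, beta + random.gauss(0, 40)))
    if random.random() < 0.2:
        pdl = random.choice(PDL_LEVELS)
    return (beta, pdl)

def crossover(a, b):
    return (random.choice([a[0], b[0]]), random.choice([a[1], b[1]]))

pop = [random_individual() for _ in range(30)]
for _ in range(40):                          # generations
    pop.sort(key=lambda ind: quality(*ind), reverse=True)
    parents = pop[:10]                       # truncation selection, elitist
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
best = max(pop, key=lambda ind: quality(*ind))
print(best)
```

Because the elite parents are carried over unchanged, the best settings found so far are never lost between generations.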
Open access
The future of bone regeneration: integrating AI into tissue engineering
Benita S Mackay et al 2021 Biomed. Phys. Eng. Express 052002
Tissue engineering is a branch of regenerative medicine that harnesses biomaterial and stem cell research to utilise the body’s natural healing responses to regenerate tissue and organs. There remain many unanswered questions in tissue engineering, with optimal biomaterial designs still to be developed and a lack of adequate stem cell knowledge limiting successful application. Advances in artificial intelligence (AI), and deep learning specifically, offer the potential to improve both scientific understanding and clinical outcomes in regenerative medicine. With a clearer understanding of how to integrate it into current research and clinical practice, AI offers an invaluable tool for improving patient outcomes.
Open access
A mechanistic investigation of the oxygen fixation hypothesis and oxygen enhancement ratio
David Robert Grimes and Mike Partridge 2015 Biomed. Phys. Eng. Express 045209
The presence of oxygen in tumours has a substantial impact on treatment outcome; relative to anoxic regions, well-oxygenated cells respond better to radiotherapy by a factor of 2.5–3. This increased radio-response is known as the oxygen enhancement ratio. The oxygen effect is most commonly explained by the oxygen fixation hypothesis, which postulates that radical-induced DNA damage can be permanently ‘fixed’ by molecular oxygen, rendering DNA damage irreparable. While this oxygen effect is important both in existing therapy and for future modalities such as radiation dose-painting, the majority of existing mathematical models for oxygen enhancement are empirical rather than based on the underlying physics and radiochemistry. Here we propose a model of oxygen-enhanced damage from physical first principles, investigating factors that might influence cell kill. This is fitted to a range of experimental oxygen curves from the literature and shown to describe them well, yielding a single robust term for the oxygen interaction. The model also reveals that a small thermal dependency exists, but that it is unlikely to be exploitable.
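For comparison, the widely used empirical Alper–Howard-Flanders form of the OER, against which mechanistic models of this kind are benchmarked, can be written down directly. The constants below are typical literature values, not the parameters fitted in this paper.

```python
def oer(p_O2, oer_max=2.9, K=3.28):
    """Empirical Alper-Howard-Flanders oxygen enhancement ratio.
    p_O2 is the oxygen partial pressure in mmHg; K is the half-effect
    constant. Values are illustrative literature figures."""
    return (oer_max * p_O2 + K) / (p_O2 + K)

print(oer(0.0))              # anoxic limit: exactly 1.0
print(round(oer(160.0), 2))  # well-oxygenated: approaches oer_max
```

The curve rises steeply at low partial pressures and saturates near `oer_max`, which is why even modest reoxygenation of hypoxic regions has a large radiobiological effect.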
Open access
Differentiating long QT syndrome genotypes using electrocardiographic geometric parameterization and machine learning approaches
Martina Srutova et al 2026 Biomed. Phys. Eng. Express 12 025067
Long QT Syndrome (LQTS) is an inherited cardiac disorder characterized by dysfunctional cardiac ion channels, which result in prolonged QT intervals on electrocardiograms (ECGs). LQTS can lead to severe clinical manifestations, including syncope, ventricular arrhythmias, and sudden cardiac death. Effective genotype-specific management strategies are essential to mitigate the risk of life-threatening arrhythmias. This study aims to achieve automatic discrimination among the LQT1, LQT2, and LQT3 genotypes to enable targeted treatment and prevention strategies. Utilizing ECG data from the Telemetric and Holter ECG Warehouse’s LQTS database, our methodology involves an automated extraction process of short ECG signals, geometric parameterization techniques, and classification using a two-stage cascade of binary support vector machine classifiers. The input features for the classifiers are derived from Lead I ECG signals sampled at 200 Hz, highlighting the potential application in developing single-lead ECG monitoring devices and applications, such as widely used smartwatches, for which short recording duration, low sampling frequency, and arm-to-arm lead measurements are fundamental prerequisites for practical use. The proposed classifier achieved 71% weighted accuracy on out-of-sample data (LQT1: 65% recall, 58% precision; LQT2: 79% recall, 82% precision; and LQT3: 71% recall, 77% precision). Our findings demonstrate the feasibility of noninvasive genotype differentiation for LQTS based on the morphological analysis of ECG signals, providing an advancement in the field of personalized cardiology and the development of portable diagnostic tools.
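The two-stage cascade of binary SVM classifiers can be sketched with scikit-learn. Synthetic 2-D features stand in for the geometric ECG parameters, and the cascade ordering (LQT3 vs. rest, then LQT1 vs. LQT2) is an illustrative choice, not necessarily the authors'.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature clusters standing in for the geometric T-wave
# parameters of the three genotypes.
def cluster(center, n=60):
    return rng.normal(center, 0.5, size=(n, 2))

X = np.vstack([cluster([0, 0]), cluster([3, 0]), cluster([0, 3])])
y = np.array([1] * 60 + [2] * 60 + [3] * 60)   # LQT1 / LQT2 / LQT3

# Stage 1: LQT3 vs the rest; stage 2: LQT1 vs LQT2 on the remainder.
stage1 = SVC(kernel="rbf").fit(X, (y == 3).astype(int))
mask12 = y != 3
stage2 = SVC(kernel="rbf").fit(X[mask12], y[mask12])

def predict(x):
    x = np.asarray(x, dtype=float).reshape(1, -1)
    if stage1.predict(x)[0] == 1:
        return 3
    return int(stage2.predict(x)[0])

print(predict([0, 3]), predict([3, 0]), predict([0, 0]))
```

Cascading binary classifiers lets each stage use a decision boundary tuned to one genotype contrast rather than forcing a single three-way boundary.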
Open access
Patient outcome prognosis for external beam radiation therapy using CBCT-based radiomics: a systematic review
Chih-Wei Chang et al 2026 Biomed. Phys. Eng. Express 12 012002
Objective.
This review investigates the use of cone-beam computed tomography (CBCT) in conjunction with radiomics for external beam radiation therapy (EBRT) in cancer treatment. CBCT, which provides high-resolution, volumetric images, offers a promising tool for precision treatment delivery. By integrating radiomics and quantitative features extracted from CBCT, this review explores potential advancements in tumor characterization, treatment planning, and monitoring treatment responses in personalized cancer therapy.
Approach.
We conducted this systematic review using the PRISMA (preferred reporting items for systematic reviews and meta-analyses) framework. This study focused on CBCT-only radiomics applications, examining publications in PubMed, Embase, and Scopus databases. The inclusion criteria were strictly peer-reviewed journal articles, resulting in 29 studies being selected for analysis. These studies were divided into two main categories: (1) method development for treatment outcome prediction; (2) verification, validation, and uncertainty quantification (VVUQ) for CBCT-based radiomics.
Main Results.
The literature encompasses a range of investigations into CBCT-based radiomics for EBRT, covering different cancer types such as head-and-neck squamous cell carcinoma, non-small cell lung cancer, esophageal squamous cell cancer, hepatocellular carcinoma, prostate cancer, and rectal cancer. These studies used radiomics to predict outcomes including tumor response, local failure, tissue toxicity, and patient survival. VVUQ studies addressed the robustness and reproducibility of radiomic features. Furthermore, the emerging field of 4D-CBCT radiomics shows potential in improving image quality.
Significance.
CBCT-based radiomics presents a promising advancement in personalized radiotherapy, allowing for enhanced cancer prognosis and treatment adaptation. However, challenges in imaging quality and acquisition need to be addressed to ensure consistency and reliability. Future research should focus on standardizing imaging protocols and incorporating multi-institutional collaborations to further validate the clinical applicability of CBCT-based radiomics. Integration of this technology could drive a paradigm shift in personalized cancer radiotherapy, and continuing improvements in CBCT image quality stand to make it more valuable still.
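As an illustration of the quantitative features radiomics pipelines extract, a few first-order features can be computed directly from an ROI's intensities. This is a toy subset; standardized pipelines such as pyradiomics compute far more features, with controlled discretization and preprocessing.

```python
import numpy as np

def first_order_features(roi, n_bins=32):
    """A small subset of first-order radiomic features from the
    intensities of a region of interest (any array shape)."""
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before the log
    return {
        "mean": float(x.mean()),
        "variance": float(x.var()),
        "skewness": float(((x - x.mean()) ** 3).mean() / x.std() ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),   # histogram entropy, bits
    }

rng = np.random.default_rng(0)
# Synthetic 16x16x16 ROI with roughly Gaussian intensities.
feats = first_order_features(rng.normal(100, 10, size=(16, 16, 16)))
print({k: round(v, 2) for k, v in feats.items()})
```

Reproducibility of such features across scanners and acquisition protocols is exactly what the VVUQ studies in the review evaluate.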
Open access
A Taguchi-optimized agar phantom for temperature-based validation in electro-hyperthermia
Chih-Wu Cheng et al 2026 Biomed. Phys. Eng. Express 12 035010
Objective.
This study aimed to develop and optimize an agar-based phantom using the Taguchi method for temperature-based validation of electro-hyperthermia systems in a quality assurance (QA) framework.
Materials and methods.
This study utilized the Taguchi method with an orthogonal array to design nine agar phantom formulations with varying concentrations of three factors: agar, sodium chloride (NaCl), and sodium azide (NaN₃). Heating was performed using the Oncotherm EHY-2000 RF hyperthermia device under six different power levels (25–100 W), with each stage lasting 5 min for a total duration of 30 min. Temperature changes were measured at a depth of 5 cm within the phantom using a type T thermocouple thermometer. A pork tissue model was used as the reference standard for comparison.
Results.
All phantom formulations exhibited a linear increase in temperature during the heating process. Analysis using Minitab software identified the optimal formulation, consisting of 5.0% agar, 0.1% NaCl, and 0.44% NaN₃, as producing a temperature profile most closely resembling that of the pork tissue model. Taguchi analysis indicated that agar concentration was the most significant factor influencing temperature variation. The interactions among the three formulation variables were weak, suggesting that each factor could be optimized independently.
Significance.
The agar phantom developed in this study features simple fabrication, reusability, and long-term storability, making it a practical tool for detecting abnormal heating in hyperthermia devices and enhancing treatment safety and accuracy. It is suited for routine QA of the Oncotherm electro-hyperthermia system.
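The Taguchi analysis step, ranking factors by their effect on a signal-to-noise ratio over an L9 orthogonal array, can be sketched as follows. The response values are invented, not the study's measurements; only the L9 layout and the S/N formula are standard.

```python
import numpy as np

# Standard L9 orthogonal array: 9 runs x 3 factors, each at 3 levels (0, 1, 2).
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Hypothetical response per run, e.g. deviation of the phantom's temperature
# rise from the pork-tissue reference (smaller is better).
y = np.array([1.8, 1.2, 0.9, 1.5, 0.8, 1.1, 0.7, 1.0, 1.3])

sn = -10 * np.log10(y ** 2)   # smaller-the-better S/N ratio, one replicate

# Mean S/N per factor level; the factor with the widest range dominates.
for f, name in enumerate(["agar", "NaCl", "NaN3"]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, [round(m, 2) for m in means],
          "range:", round(max(means) - min(means), 2))
```

Because every level of every factor appears in exactly three runs, level means are balanced comparisons even though only 9 of the 27 possible combinations are tested.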
Evaluation of alternative scintillation crystals to improve performance and cost-effectiveness in total-body PET scanner
Maryam Ghanbari Khanqah et al 2026 Biomed. Phys. Eng. Express 12 035009
Introduction.
Total-body PET scanners are advanced technologies in medical imaging, enabling whole-body imaging with high precision and sensitivity while reducing radioactive dose and shortening imaging time. One of the most advanced systems is the Biograph Vision Quadra by Siemens Healthineers, representing the new generation of this technology. With LSO crystals and an axial field of view of 106 cm, it provides complete body imaging in a single scan, improving efficiency and safety. However, the high cost of such systems remains a challenge; prices typically range from 3 to 5 million dollars. To reduce costs and improve performance, alternative scintillation crystals such as LYSO, BGO, and NaI(Tl) have been investigated on account of their light output, decay time, and production cost.
Materials and methods.
The performance of the Biograph Vision Quadra was evaluated using the GATE Monte Carlo simulation tool. Simulations followed NEMA NU 2–2018 standards to assess parameters including image resolution, sensitivity, count rate, and scatter fraction. Data were reconstructed and analyzed using CASToR and AMIDE software.
Results.
The simulations closely matched experimental data, showing an error of about 8% relative to clinical data, confirming their accuracy. Among the crystals, BGO proved the most cost-effective, offering 19% higher sensitivity than LSO, LYSO, and NaI(Tl), while maintaining adequate spatial resolution and NECR. Although its light output is lower than that of LSO, BGO’s cost-effectiveness and good performance in sensitivity and count rate make it a suitable choice for reducing scanner costs.
Conclusion.
Replacing LSO with BGO can significantly improve the cost-effectiveness of the Biograph Vision Quadra without compromising image quality. This study highlights the potential for optimizing PET scanners through alternative scintillation materials, improving accessibility and supporting early disease detection in clinical practice.
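The physical intuition behind BGO's sensitivity advantage can be illustrated with a one-line attenuation estimate. The 511 keV attenuation coefficients below are approximate literature values for illustration only; the study itself relies on full GATE Monte Carlo simulation rather than this simple model.

```python
import math

# Approximate linear attenuation coefficients at 511 keV (cm^-1);
# illustrative literature values, not those used in the simulations.
MU_511 = {"LSO": 0.87, "BGO": 0.96, "NaI(Tl)": 0.34}

def stopping_efficiency(crystal, thickness_cm):
    """Fraction of 511 keV photons interacting in a crystal of given depth,
    from the Beer-Lambert law: 1 - exp(-mu * L)."""
    return 1.0 - math.exp(-MU_511[crystal] * thickness_cm)

for name in MU_511:
    print(name, round(stopping_efficiency(name, 2.0), 3))
```

Because coincidence detection requires both annihilation photons to interact, the single-crystal efficiency enters the system sensitivity roughly squared, amplifying BGO's per-crystal advantage.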
Open access
Structural identifiability of single-beat estimation of the ventricular end-systolic pressure–volume relationship
Fabijan Lulić et al 2026 Biomed. Phys. Eng. Express 12 035008
Single-beat (SB) approaches for estimating the end-systolic pressure–volume relationship (ESPVR) from a single pressure–volume (PV) loop are widely used in experimental and clinical research, yet their structural foundations remain insufficiently formalized. ESPVR is a phenomenological construct defined from multi-beat (MB) measurements across varying loading conditions, and its SB estimation therefore constitutes an inverse problem. We show that SB estimation of the ESPVR slope (E_es) is intrinsically underdetermined, as a single PV loop provides fewer independent constraints than unknown parameters. Consequently, any SB method requires auxiliary information to achieve mathematical closure. Using nine high-fidelity PV loops obtained during vena cava occlusion in a porcine model as a demonstrator dataset, we quantify (i) the sensitivity of the MB-derived E_es,MB to the selection of the PV loop subset, and (ii) the sensitivity of E_es to representative classes of auxiliary information. The analysis reveals that auxiliary information based solely on population-averaged normalized elastance curves is structurally inconsistent with the MB reference definition. Among the examined candidates, the normalized elastance at the onset of ejection (E_N,dia) exhibits the most favorable structural properties, combining low sensitivity of E_es to estimation errors with a strong empirical association captured by regression modeling. By reframing SB estimation of ESPVR as a structural identifiability problem rather than a purely numerical task, this study establishes criteria for physiologically consistent auxiliary relations and highlights the necessity of large, standardized MB databases for future data-driven SB methodologies.
Kali MC: an open-source toolkit for intraoperative electron radiation therapy treatment planning
Rafael Ayala et al 2026 Biomed. Phys. Eng. Express 12 037001
Intraoperative electron radiation therapy (IOERT) is commonly delivered without intraoperative imaging, limiting patient-specific planning and requiring fast, reliable treatment calculations in the operating room. As a result, monitor unit (MU) calculations are often performed using spreadsheets and static look-up tables derived from measurements in water. We present Kali MC, an open source, Python-based software developed to streamline IOERT absorbed dose visualization, MU calculation and report generation for treatments delivered with the Liac HWL mobile accelerator. The software provides a graphical interface to visualize precomputed Monte Carlo absorbed dose distributions in water and performs MU calculations using experimentally determined output factors. Optional atmospheric pressure correction and predefined rescaling factors are supported to improve consistency of dose delivery and target coverage with the prescription isodose. Kali MC also generates customizable PDF treatment reports and exports DICOM RT Plan objects to facilitate integration with record-and-verify systems. By integrating validated dosimetric data, correction methods, and 3D dose visualization into a transparent platform, Kali MC supports efficient intraoperative decision-making; however, its use should be locally commissioned and evaluated according to institution-specific procedures and equipment characteristics.
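The core monitor-unit calculation that such software automates can be sketched as below. The output-factor table, parameter names, and prescription-isodose convention are hypothetical illustrations, not Kali MC's actual data or API.

```python
# Hypothetical output-factor table: cGy/MU at d_max in water,
# keyed by (beam energy in MeV, applicator diameter in cm).
OUTPUT_FACTOR = {
    (6, 5): 0.95, (6, 7): 0.99,
    (9, 5): 0.93, (9, 7): 0.97,
}

def monitor_units(dose_cgy, energy, applicator,
                  prescription_isodose=0.9, pressure_correction=1.0):
    """MU needed so the prescription isodose surface receives dose_cgy.
    The optional pressure_correction mirrors the kind of atmospheric
    correction the text mentions (illustrative placement of the factor)."""
    of = OUTPUT_FACTOR[(energy, applicator)]
    return dose_cgy / (of * prescription_isodose) * pressure_correction

print(round(monitor_units(1000, 9, 7), 1))
```

In clinical use every entry in such a table comes from locally commissioned measurements, which is why the abstract stresses institution-specific validation.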
Synergistic effects of plaque geometry and composition on coronary hemodynamics and mechanical stability: a multiscale computational study
Yinghong Zhao et al 2026 Biomed. Phys. Eng. Express 12 035007
Cardiovascular disease remains the leading cause of global mortality, with the rupture of vulnerable atherosclerotic plaques accounting for the majority of acute myocardial infarctions. While plaque morphology and composition are well recognized as critical determinants of vulnerability, their combined effects across clinically relevant stenosis severities (50%–80%) remain incompletely understood. To address this gap, this study systematically investigated the collective influence of plaque geometry (eccentric vs. concentric) and material composition (lipid, fibrous, calcified) on coronary hemodynamics and mechanical plaque stability under identical stenosis conditions. The left anterior descending artery was reconstructed using clinical computed tomography angiography data, and key hemodynamic metrics (wall shear stress [WSS], oscillatory shear index [OSI], relative residence time [RRT]) and structural metrics (plaque von Mises stress and deformation) were quantified via coupled computational fluid dynamics and fluid–structure interaction simulations. The results demonstrated that eccentric plaques induced significantly more pronounced asymmetric flow disturbances, steeper WSS gradients, and higher RRT values than concentric geometries, particularly at higher stenosis severities; notably, at 70% stenosis, the mean RRT of eccentric plaques (0.10865) was nearly double that of concentric plaques (0.05686), and eccentric plaques exhibited a distinctive low-oscillatory shear environment with upstream mean OSI reduced to 0.14101, whereas concentric plaques showed upstream mean OSI elevated to 0.25607. Compositionally, lipid-rich regions experienced the greatest deformation, highlighting their role as mechanical ‘weak spots’, whereas calcified areas showed minimal deformation but generated interfacial stress concentrations.
These findings elucidate the synergistic interaction between plaque geometry and composition in modulating coronary hemodynamics and mechanical integrity, with eccentric morphology exacerbating adverse biomechanical conditions as stenosis progresses. This study provides a novel, multiscale biomechanical framework for assessing plaque vulnerability and informs the development of personalized intervention strategies tailored to specific plaque characteristics.
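The shear-derived metrics above follow standard definitions and can be computed directly from a WSS time series at a wall point. This is a minimal sketch of those definitions under uniform time sampling, not the authors' solver post-processing.

```python
import numpy as np

def osi_rrt(wss_t):
    """Oscillatory shear index and relative residence time from a time
    series of the wall shear stress vector (shape: [timesteps, 3]) at one
    wall point, assuming uniform time sampling:
      OSI = 0.5 * (1 - |mean(tau)| / mean(|tau|))
      RRT = 1 / ((1 - 2*OSI) * TAWSS)"""
    wss_t = np.asarray(wss_t, dtype=float)
    mag_mean = np.linalg.norm(wss_t.mean(axis=0))   # |(1/T) integral of tau|
    tawss = np.linalg.norm(wss_t, axis=1).mean()    # (1/T) integral of |tau|
    osi = 0.5 * (1.0 - mag_mean / tawss)
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)
    return osi, rrt

# Unidirectional unit shear: OSI = 0, RRT = 1/TAWSS = 1.
steady = np.zeros((200, 3)); steady[:, 0] = 1.0
print(osi_rrt(steady))

# Flow reversing direction for a quarter of the cycle: higher OSI and RRT.
reversing = steady.copy(); reversing[150:, 0] = -1.0
print(osi_rrt(reversing))
```

Note that RRT diverges as OSI approaches 0.5 (purely oscillatory shear), which is precisely why high-RRT regions mark long residence of blood near the wall.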
Cervical implant fixation: a topical review of techniques and their importance
Subhasish Halder
et al
2026
Biomed. Phys. Eng. Express
12
022002
View article
, Cervical implant fixation: a topical review of techniques and their importance
PDF
, Cervical implant fixation: a topical review of techniques and their importance
Cervical implant fixation is a critical surgical intervention for stabilizing the cervical spine, often necessitated by trauma, degenerative diseases, or spinal deformities. While fusion-based procedures have long been used to treat cervical disc disease, recent years have seen increasing interest in motion-preserving techniques such as cervical disc arthroplasty. The present study provides a topical narrative review of selected and recent literature on cervical implant fixation techniques, including anterior and posterior approaches, implant materials, biomechanical considerations, and reported clinical outcomes. Developments in implant design and fixation strategies have contributed to improved radiographic and functional results compared with earlier systems, although each technique presents specific benefits and limitations. Cervical implant fixation has evolved into a highly sophisticated discipline that encompasses anterior, posterior, and motion-preserving techniques for treating a wide range of spinal conditions. This review summarises recent advances, common complications, and emerging trends in cervical fixation, and highlights existing research gaps to support future investigation and clinical decision-making.
The following article is
Open access
Magnetic nanoparticles for cancer theranostics
Jaiden Hart
et al
2026
Biomed. Phys. Eng. Express
12
022001
View article
, Magnetic nanoparticles for cancer theranostics
PDF
, Magnetic nanoparticles for cancer theranostics
Magnetic nanoparticles (MNPs) have emerged as a powerful tool in cancer theranostics due to their unique size-dependent magnetic properties, surface functionalization capabilities, and responsiveness to external magnetic fields. This review outlines different types of MNPs, including those composed of pure metals, metal oxides, and metallic alloys, and highlights their size-dependent magnetic behavior, such as superparamagnetism and dynamic magnetizations. We also explore the critical role of surface modification strategies in enhancing MNPs’ biocompatibility, colloidal stability, and functional versatility for targeted biomedical applications. The applications of MNPs in cancer therapy are discussed, with a focus on magnetic hyperthermia, drug and gene delivery, and a combination of various therapies. Additionally, we examine their cancer diagnostic roles in imaging techniques such as magnetic resonance imaging (MRI) and magnetic particle imaging (MPI), and emerging magnetic biosensing technologies such as giant magnetoresistance (GMR), magnetic tunnel junction (MTJ), magnetic particle spectroscopy (MPS), and nuclear magnetic resonance (NMR)-based platforms. These advances collectively establish MNPs as key components in the future of personalized cancer diagnosis and treatment.
Emerging innovations in polycaprolactone-chitosan-hydroxyapatite composite scaffolds for tissue engineering: a review
Mohammed Razzaq Mohammed
2026
Biomed. Phys. Eng. Express
12
012004
View article
, Emerging innovations in polycaprolactone-chitosan-hydroxyapatite composite scaffolds for tissue engineering: a review
PDF
, Emerging innovations in polycaprolactone-chitosan-hydroxyapatite composite scaffolds for tissue engineering: a review
Polycaprolactone (PCL), chitosan (CS), and hydroxyapatite (HA) have emerged as complementary biomaterials for the design of advanced scaffolds in tissue engineering (TE). Individually, PCL offers excellent mechanical strength and formability but suffers from hydrophobicity and slow degradation. CS provides biocompatibility, antibacterial properties, and favorable cell–material interactions, yet its insufficient mechanical stability limits standalone use. HA, a bioactive ceramic, enhances osteoconductivity; nevertheless, it is brittle in pure form. Recent advances focus on integrating these three components into hybrid composites to harness their desired characteristics. Novel fabrication approaches, including electrospinning and 3D printing, have been optimized to tailor scaffold architecture, porosity, and mechanical integrity. Studies highlight enhanced cellular adhesion and differentiation, as well as improved angiogenic and antibacterial performance, when scaffolds are functionalized with bioactive agents or nanoparticles. For instance, the incorporation of nano-HA into PCL/CS scaffolds markedly boosted the proliferation of skin fibroblast cells (HSF 1184), yielding a 23% increase over PCL/CS scaffolds by day 3. In addition, HA-PCL/CS nanofibrous composite scaffolds demonstrated a marked improvement in mechanical stiffness, with an increase of greater than 15% in modulus of elasticity compared to the PCL/CS scaffold. Despite these advances, challenges remain in achieving controlled degradation, uniform dispersion of components, and scalable, reproducible fabrication for clinical translation. This review fills a critical gap by providing the first comprehensive analysis of advancements in PCL-CS-HA ternary TE systems, an area that remains unexplored despite existing reviews on individual materials and their binary combinations.
It analyzes latest developments in PCL-CS-HA composites, highlighting their structure, characteristics, processing strategies, biological outcomes, and future directions.
Emerging roles and mechanisms of nanoparticles in cancer treatment: innovations and horizons
Asmat Ullah
et al
2026
Biomed. Phys. Eng. Express
12
012003
View article
, Emerging roles and mechanisms of nanoparticles in cancer treatment: innovations and horizons
PDF
, Emerging roles and mechanisms of nanoparticles in cancer treatment: innovations and horizons
Cancer ranks among the leading causes of mortality worldwide, and current treatments are often insufficient, creating a need for better technologies. Several new cancer treatments have moved directly from the lab to the clinic; the manufacturing of nanomedicine products, made possible by the rapid expansion of nanotechnology, holds enormous potential for enhancing cancer treatment approaches. The advent of nanotechnology has opened the door to multi-functionality and highly precise targeting strategies. Nanomedicines have the potential to enhance the pharmacodynamic and pharmacokinetic profiles of conventional treatment approaches, potentially leading to a reevaluation of the effectiveness of current anti-cancer drugs. One novel technique for enhancing traditional onco-immunotherapies is to recruit nanoparticle-based delivery systems, which are adaptable carriers for a broad range of molecular payloads; the delivery of payloads to the target site and their release can be well regulated. In this review, we summarize the latest developments in nanobiotechnology for improving immunotherapies and reshaping tumour microenvironments (TMEs). The current clinical challenges that impede the real-time implementation of cancer nanomedicine are discussed, and this review consolidates existing knowledge and recent advancements in the use of nanoparticles for cancer therapy, providing researchers, clinicians, and students with a comprehensive understanding of the current state of the field. Finally, potential future directions are highlighted to enhance therapeutic efficacy and facilitate the clinical translation of cancer nanomedicine.
The following article is
Open access
Patient outcome prognosis for external beam radiation therapy using CBCT-based radiomics: a systematic review
Chih-Wei Chang
et al
2026
Biomed. Phys. Eng. Express
12
012002
View article
, Patient outcome prognosis for external beam radiation therapy using CBCT-based radiomics: a systematic review
PDF
, Patient outcome prognosis for external beam radiation therapy using CBCT-based radiomics: a systematic review
Objective
. This review investigates the use of cone-beam computed tomography (CBCT) in conjunction with radiomics for external beam radiation therapy (EBRT) in cancer treatment. CBCT, which provides high-resolution, volumetric images, offers a promising tool for precision treatment delivery. By integrating radiomics, i.e. quantitative features extracted from CBCT images, this review explores potential advancements in tumor characterization, treatment planning, and monitoring treatment responses in personalized cancer therapy.
Approach
. We conducted this systematic review using the PRISMA (preferred reporting items for systematic reviews and meta-analyses) framework. This study focused on CBCT-only radiomics applications, examining publications in PubMed, Embase, and Scopus databases. The inclusion criteria were strictly peer-reviewed journal articles, resulting in 29 studies being selected for analysis. These studies were divided into two main categories: (1) method development for treatment outcome prediction; (2) verification, validation, and uncertainty quantification (VVUQ) for CBCT-based radiomics.
Main Results
. The literature encompasses a range of investigations into CBCT-based radiomics for EBRT, covering different cancer types such as head-and-neck squamous cell carcinoma, non-small cell lung cancer, esophageal squamous cell cancer, hepatocellular carcinoma, prostate cancer, and rectal cancer. These studies used radiomics to predict outcomes including tumor response, local failure, tissue toxicity, and patient survival. VVUQ studies addressed the robustness and reproducibility of radiomic features. Furthermore, the emerging field of 4D-CBCT radiomics shows potential in improving image quality.
Significance
. CBCT-based radiomics presents a promising advancement in personalized radiotherapy, allowing for enhanced cancer prognosis and treatment adaptation. However, challenges of imaging quality and acquisition need to be addressed to ensure consistency and reliability. Future research should focus on standardizing imaging protocols and incorporating multi-institutional collaborations to further validate the clinical applicability of CBCT-based radiomics. Integration of this technology can potentially induce a paradigm shift in personalized cancer radiotherapy. New technologies promise to make CBCT even more valuable in the future.
Bacterial cell division is involved in the damage of gram-negative bacteria on a nano-pillar titanium surface
Manfred Köller
et al
2018
Biomed. Phys. Eng. Express
055002
View article
, Bacterial cell division is involved in the damage of gram-negative bacteria on a nano-pillar titanium surface
PDF
, Bacterial cell division is involved in the damage of gram-negative bacteria on a nano-pillar titanium surface
The role of bacterial cell division in the damage of adherent bacteria on a cicada-wing-like nano-pillar titanium (Ti) surface was analyzed. To this end, nano-pillar Ti thin films were fabricated by glancing angle sputter deposition (GLAD) on silicon substrates. Gram-negative
E. coli
bacteria were allowed to adhere and to proliferate on these nanostructured samples for 3 h at 37 °C either under optimal cell growth conditions (brain heart infusion medium, BHI) or limited growth conditions (RPMI1640 medium). The bacteria adhered to the samples in both media. Compared to BHI medium, the growth of
E. coli
in RPMI1640 medium was significantly inhibited. Concomitantly, the ratio of dead/living adherent bacteria on the nano-pillar surface was significantly decreased after the incubation period in RPMI1640. In addition, when bacterial proliferation was biochemically halted using DL-serine hydroxamate, a comparable decrease in the ratio of dead/living adherent bacteria was also obtained in BHI medium. These results indicate that cell growth of adherent
E. coli
, which is accompanied by elongation of the rod structure, is involved in the damage induced by the titanium nano-pillar surface.
A comparison of temporal Cherenkov separation techniques in pulsed signal scintillator dosimetry
James Archer
et al
2018
Biomed. Phys. Eng. Express
044003
View article
, A comparison of temporal Cherenkov separation techniques in pulsed signal scintillator dosimetry
PDF
, A comparison of temporal Cherenkov separation techniques in pulsed signal scintillator dosimetry
Cherenkov radiation is the primary source of unwanted light in a scintillator dosimetry system. In this work we compare two techniques for temporally separating Cherenkov radiation from a slow scintillator signal; both are applicable to a pulsed radiation beam. We found that analysing the rising edge of the light pulse to identify the fast Cherenkov component removed only 74% of the Cherenkov light. Integrating the tail of the signal, where only scintillation light is present, achieves a more accurate result. Averaging the results of the two methods provides up to a 90% improvement in the accuracy of the relative dose compared with an ionisation chamber, in certain measurements. This work demonstrates an alternative methodology for removing Cherenkov light using signal analysis, while preserving the full scintillation light signal and minimising the bulk of the experimental equipment.
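The tail-integration idea can be shown on a synthetic pulse. The decay constants, amplitudes, and gate time below are hypothetical, chosen only to illustrate why gating out the prompt Cherenkov transient and rescaling the truncated scintillation integral recovers the scintillation-only signal:

```python
import numpy as np

# Synthetic photodetector pulse: a prompt Cherenkov transient (fast decay)
# superimposed on a slow scintillation component. All constants are
# hypothetical, chosen only to illustrate the technique.
t = np.arange(0.0, 2000.0, 1.0)             # time samples (ns)
tau_scint = 285.0                           # slow scintillation decay (ns)
scint = 1.0 * np.exp(-t / tau_scint)        # scintillation light
cher = 5.0 * np.exp(-t / 5.0)               # Cherenkov light, gone by ~25 ns
signal = scint + cher

# Tail integration: discard the first t_gate ns (where the Cherenkov light
# lives), then rescale the truncated exponential tail to the full integral.
t_gate = 50.0
tail = signal[t >= t_gate].sum()
full_scint_est = tail / np.exp(-t_gate / tau_scint)

true_scint = scint.sum()                    # ground truth for comparison
```

The rescaling step assumes the slow decay constant is known; with these toy numbers the estimate lands within a fraction of a percent of the true scintillation integral.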
Mesenchymal stem cells cultivated on scaffolds formed by 3D printed PCL matrices, coated with PLGA electrospun nanofibers for use in tissue engineering
Natasha Maurmann
et al
2017
Biomed. Phys. Eng. Express
045005
View article
, Mesenchymal stem cells cultivated on scaffolds formed by 3D printed PCL matrices, coated with PLGA electrospun nanofibers for use in tissue engineering
PDF
, Mesenchymal stem cells cultivated on scaffolds formed by 3D printed PCL matrices, coated with PLGA electrospun nanofibers for use in tissue engineering
Materials, such as biopolymers, can be applied to produce scaffolds as mechanical support for cell growth in regenerative medicine. Two examples are polycaprolactone (PCL) and poly (lactic-co-glycolic acid) (PLGA), both used in this study to evaluate the behavior of umbilical cord-derived mesenchymal stem cells. The scaffolds were produced by the 3D printing technique using PCL as a polymer covered with PLGA fibers obtained by electrospinning. The cells were seeded in three concentrations: 8.5 × 10
; 25.5 × 10
and 51.0 × 10
on the two surfaces of the scaffolds. With scanning electron microscopy (SEM), it was observed that the electrospun fibers were integrated into the 3D printed matrices. Confocal laser scanning microscopy and SEM confirmed the presence of attached cells and the lactate dehydrogenase release test showed the scaffolds were not cytotoxic. The cells were able to differentiate into osteogenic and chondrogenic lineages on the scaffolds. Mechanical test showed that the cells seeded on the 3D printed PCL matrices coated with PLGA electrospun nanofibers (3D + ES + SC) did not show significant difference in tensile modulus than the pure PCL matrix (3D) or PCL matrices coated with PLGA electrospun nanofibers (3D + ES). The combination of the two polymers facilitated the production of a support with greater mechanical stability due to the presence of the 3D printed PCL matrices fabricated by melted filaments and greater cell adhesion due to the PLGA fibers. The scaffolds are suitable for use in cell therapy and also for tissue regeneration purposes.
The following article is
Open access
Cluster dose prediction in carbon ion therapy: Using transfer learning from a pretrained dose prediction U-Net - A proof of concept
Schwarze et al
View accepted manuscript
, Cluster dose prediction in carbon ion therapy: Using transfer learning from a pretrained dose prediction U-Net - A proof of concept
PDF
, Cluster dose prediction in carbon ion therapy: Using transfer learning from a pretrained dose prediction U-Net - A proof of concept
The cluster dose concept offers an alternative to the relative biological effectiveness (RBE)-based model for describing radiation-induced biological effects. This study examines the application of a neural network to predict cluster dose distributions, with the goal of replacing the computationally intensive simulations currently required. Cluster dose distributions are predicted using a U-Net that was initially pretrained on conventional dose distributions. Using transfer learning techniques, the decoder path is adapted for cluster dose estimation. Both the training and pretraining datasets include head and neck regions from multiple patients and carbon ion beams of varying energies and positions. Monte Carlo (MC) simulations were used to generate the ground truth cluster dose distributions. The U-Net enables cluster dose estimation for a single pencil beam within milliseconds using a graphics processing unit (GPU). The predicted cluster dose distributions deviate from the ground truth by less than 0.35%. This proof-of-principle study demonstrates the feasibility of accurately estimating cluster doses within clinically acceptable computation times using machine learning (ML). By leveraging a pretrained neural network and applying transfer learning techniques, the approach significantly reduces the need for large-scale, computationally expensive training data.
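The decoder-adaptation strategy is specific to the authors' U-Net, but the underlying transfer-learning mechanics can be sketched with a toy linear encoder-decoder (purely illustrative; the dimensions, data, and learning rate are arbitrary assumptions): gradients are applied only to the decoder weights while the "pretrained" encoder stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder-decoder standing in for the pretrained U-Net:
# transfer learning here means freezing the encoder and updating only
# the decoder on the new regression target.
W_enc = rng.normal(size=(8, 4))          # "pretrained" encoder, frozen
W_dec = rng.normal(size=(4, 1))          # decoder, re-trained for the new task

x = rng.normal(size=(64, 8))             # input samples
y = x @ (0.1 * np.ones((8, 1)))          # hypothetical new-task target

h = x @ W_enc                            # frozen encoder features
mse_init = float(np.mean((h @ W_dec - y) ** 2))

lr = 0.02
for _ in range(2000):
    grad_dec = h.T @ (h @ W_dec - y) / len(x)
    W_dec -= lr * grad_dec               # only the decoder receives updates

mse = float(np.mean((h @ W_dec - y) ** 2))
```

Because the encoder is fixed, only the small decoder needs new training data, which is the same economy the abstract describes for avoiding large-scale Monte Carlo training sets.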
SAM2HIPT: A hybrid deep learning framework integrating SAM2 and HIPT with joint loss optimization for immunohistochemical cell nucleus segmentation
Yao et al
View accepted manuscript
, SAM2HIPT: A hybrid deep learning framework integrating SAM2 and HIPT with joint loss optimization for immunohistochemical cell nucleus segmentation
PDF
, SAM2HIPT: A hybrid deep learning framework integrating SAM2 and HIPT with joint loss optimization for immunohistochemical cell nucleus segmentation
Nucleus segmentation in immunohistochemistry (IHC) images plays a critical role in cancer diagnosis and treatment assessment. However, existing methods remain limited in segmentation accuracy and boundary delineation due to staining heterogeneity, densely packed cell distributions, and complex background interference. To address these challenges, this paper proposes a two-stage nucleus segmentation framework, termed SAM2HIPT. In the first stage, the pre-trained Segment Anything Model 2 (SAM2) is employed to generate initial segmentation predictions for input images, wherein the image encoder is kept frozen to preserve the pre-trained visual representation capacity while the mask decoder is fine-tuned to adapt to the characteristics of the pathological image domain; local texture, morphological, and boundary information are extracted through visual feature encoding to produce initial nucleus segmentation masks and spatial prior representations. In the second stage, the Hierarchical Image Pyramid Transformer (HIPT) is introduced to refine the initial segmentation results, performing multi-scale, multi-level feature representation and fusion of morphological, textural, and spatial structural information through a hierarchical vision Transformer architecture, thereby enhancing nuclear structural representation and boundary consistency. To enable collaborative optimization across both stages, a joint loss function is designed to impose unified constraints on segmentation accuracy and feature representation. Evaluated on two public histopathological benchmark datasets, BCData and DeepLIIF, the proposed method achieves Dice coefficients of 0.92 and 0.91, respectively, and HD95 boundary error values of 1.05 pixels and 1.10 pixels, demonstrating superior segmentation performance and robustness over multiple state-of-the-art baseline methods.
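The joint loss is described only at a high level, so the sketch below is a generic illustration rather than the paper's exact objective: a weighted combination of a soft Dice loss (the overlap metric the paper reports) with binary cross-entropy over flattened probability masks.

```python
import numpy as np

def dice_coeff(pred, target, eps=1e-7):
    """Soft Dice coefficient between a predicted probability mask and a
    binary target mask (both given as flat arrays)."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def joint_loss(pred, target, w_dice=0.5, w_bce=0.5, eps=1e-7):
    """Illustrative joint objective: weighted Dice loss plus binary
    cross-entropy, jointly penalizing region overlap and per-pixel error."""
    bce = -np.mean(target * np.log(pred + eps)
                   + (1.0 - target) * np.log(1.0 - pred + eps))
    return w_dice * (1.0 - dice_coeff(pred, target, eps)) + w_bce * bce

target = np.array([1.0, 1.0, 0.0, 0.0])    # toy ground-truth mask
perfect = np.array([1.0, 1.0, 0.0, 0.0])   # perfect prediction
loss = joint_loss(perfect, target)
```

The Dice term rewards region overlap while the cross-entropy term constrains every pixel, which is the usual motivation for combining them in segmentation objectives.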
The following article is
Open access
Dosimetric evaluation of simultaneous multi-energy (2-18 MV) and intensity optimization for IMRT and VMAT
Rohani et al
View accepted manuscript
, Dosimetric evaluation of simultaneous multi-energy (2-18 MV) and intensity optimization for IMRT and VMAT
PDF
, Dosimetric evaluation of simultaneous multi-energy (2-18 MV) and intensity optimization for IMRT and VMAT
Objective. This study aimed to develop and evaluate a framework for the simultaneous optimization of beam fluence and multiple photon energies for IMRT and VMAT. Approach. An Elekta Versa HD linear accelerator (linac) was modeled using the BEAMnrc/EGSnrc Monte Carlo platform to simulate 2 MV, 4 MV, and 6 MV photon beams, complemented with measured beam data for 10 MV and 18 MV. A custom MATLAB-based framework was developed to perform simultaneous optimization of energy and fluence, allowing energy weighting of individual beamlets. A total of 12 energy configurations, including single- (SE), dual- (DE), triple- (TE), and quadruple-energy (QE) combinations, were generated for thirteen prostate cancer patients. Main results. Target coverage remained comparable across all energy configurations. Compared with 6 SE, multi-energy IMRT and VMAT achieved superior OAR sparing, with the greatest benefits for bladder and rectum in the 5–40 Gy range. TE-IMRT reduced bladder and rectal mean doses by up to 1.6 Gy, with V10 reductions of ~8%; on average, mean doses were reduced by 0.9 Gy (p = 0.001) and 1.0 Gy (p = 0.001). For VMAT, the 6&18 DE-VMAT plan yielded the largest sparing, lowering bladder and rectal mean doses by ~1.0 Gy. Relative to 10 SE, all multi-energy plans achieved equal or lower OAR doses, with TE-IMRT and 6&18 DE-VMAT showing the most consistent benefit, reducing mean doses to the bladder, rectum, right femoral head, and left femoral head by up to 1.26, 0.77, 0.6, and 0.55 Gy, respectively. Significance. This study presents the first assessment of the feasibility and potential clinical value of simultaneous optimization of beam intensity and more than two photon energies for IMRT and VMAT plans, producing optimized multi-energy fluence maps that require subsequent conversion into clinically deliverable treatment plans.
A modified head loss model with a correction factor for the human femoral artery
Nayak et al
View accepted manuscript
, A modified head loss model with a correction factor for the human femoral artery
PDF
, A modified head loss model with a correction factor for the human femoral artery
Understanding the hemodynamics of the human circulatory system is crucial for diagnosing and treating cardiovascular diseases. To this end, one of the critical vascular analyses involves determining head loss, or pressure drop, in arteries, as it provides substantial insight into vascular health and efficiency. Thus, this work presents an assessment of head loss in the human femoral artery, one of the major blood vessels in the lower body, comprising the common femoral artery, the deep femoral artery, and the superficial femoral artery, which extend into the popliteal artery. Our study modeled this arterial system as a network of elastic circular pipes, and using the proposed theories, we calculated the pressurized diameters and the head loss in each segment, considering minor losses arising from vessel curvature and geometric variations within the arterial network. Because the proposed theories rely on certain assumptions, to assess the validity of the theoretical predictions, a two-dimensional CFD model of an idealized human femoral artery was simulated using available clinical data on the model parameters. Pressurized diameters were computed using both the CFD model and the theoretical formulation and were statistically compared. The results showed a statistically significant difference between the two, highlighting the importance of accurately capturing the elastic behavior of the arterial wall. Accordingly, a modified pressurized diameter formulation incorporating a segment-specific correction factor was proposed based on the CFD results. Findings showed that this correction factor is greater for the deep femoral arterial segment compared to the other segments.
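The segment-wise head-loss bookkeeping described above follows standard pipe-flow theory. As a hedged illustration (the segment length, diameter, flow rate, and minor-loss coefficient below are hypothetical, not the paper's clinical data), a Darcy-Weisbach estimate for one rigid segment with a laminar friction factor looks like:

```python
import math

def head_loss(L, D, Q, K_minor=0.0, rho=1060.0, mu=0.0035, g=9.81):
    """Head loss (m) for one arterial segment modeled as a circular pipe:
    Darcy-Weisbach friction loss f*(L/D)*v^2/(2g) plus minor losses
    K_minor*v^2/(2g). A laminar friction factor f = 64/Re is assumed,
    with blood density rho (kg/m^3) and viscosity mu (Pa s)."""
    A = math.pi * D ** 2 / 4.0          # cross-sectional area (m^2)
    v = Q / A                           # mean velocity (m/s)
    Re = rho * v * D / mu               # Reynolds number
    f = 64.0 / Re                       # laminar-flow friction factor
    return (f * L / D + K_minor) * v ** 2 / (2.0 * g)

# Hypothetical femoral-like segment: 25 cm long, 8 mm diameter,
# 350 ml/min flow, small minor-loss coefficient for curvature.
h = head_loss(L=0.25, D=0.008, Q=350e-6 / 60.0, K_minor=0.2)
```

The paper's contribution is precisely that such rigid-pipe formulas need a segment-specific correction once the elastic (pressurized) diameter is accounted for; the sketch shows only the baseline calculation being corrected.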
Enhancing CNN regressors with contour encoding and self-supervision for improved 3D/2D X-ray to CT registration in spinal surgery navigation
Zhang et al
View accepted manuscript
, Enhancing CNN regressors with contour encoding and self-supervision for improved 3D/2D X-ray to CT registration in spinal surgery navigation
PDF
, Enhancing CNN regressors with contour encoding and self-supervision for improved 3D/2D X-ray to CT registration in spinal surgery navigation
With advances in deep learning, regression-based methods have shown promising results in 3D/2D medical image registration. However, strict intraoperative radiation dose constraints produce low-dose X-ray images with severe blur and reduced contrast, significantly degrading registration accuracy and limiting precise image-guided spinal interventions. We propose the Contour Feature Encoding Regressor (CER), a novel end-to-end CNN framework that extracts highly discriminative features directly from binary contour masks of intraoperative X-rays without any restrictions on contour length, shape, or morphology. These contour features are efficiently encoded by a dedicated module and seamlessly fused into the regression pipeline to improve robustness against image degradation. To further enhance pose estimation, CER employs a dual-branch architecture that explicitly decouples rotational and translational parameters, thereby reducing mutual interference and improving overall accuracy. In addition, a self-supervised fine-tuning strategy with a tailored multi-component loss function is introduced to adapt the model to blurred low-dose conditions and minimize residual errors. On low-dose X-ray images, CER achieves a mean target registration error (mTRE) of 1.39 mm, within a clinically acceptable threshold, while outperforming state-of-the-art methods in accuracy and enabling real-time performance (0.03–0.06 s per frame on clinically accessible GPUs). These improvements meet the stringent precision and speed requirements of intraoperative navigation, offering strong potential to enhance surgical safety and outcomes in minimally invasive spinal procedures.
More Accepted manuscripts
Trending on Altmetric
The following article is
Open access
A Taguchi-optimized agar phantom for temperature-based validation in electro-hyperthermia
Chih-Wu Cheng
et al
2026
Biomed. Phys. Eng. Express
12
035010
View article
, A Taguchi-optimized agar phantom for temperature-based validation in electro-hyperthermia
PDF
, A Taguchi-optimized agar phantom for temperature-based validation in electro-hyperthermia
Objective
. This study aimed to develop and optimize an agar-based phantom using the Taguchi method for temperature-based validation of electro-hyperthermia systems in a quality assurance (QA) framework.
Materials and methods
. This study utilized the Taguchi method with an orthogonal array to design nine agar phantom formulations with varying concentrations of three factors: agar, sodium chloride (NaCl), and sodium azide (NaN₃). Heating was performed using the Oncotherm EHY-2000 RF hyperthermia device under six different power levels (25–100 W), with each stage lasting 5 min for a total duration of 30 min. Temperature changes were measured at a depth of 5 cm within the phantom using a type T thermocouple thermometer. A pork tissue model was used as the reference standard for comparison.
Results.
All phantom formulations exhibited a linear increase in temperature during the heating process. Analysis using Minitab software identified the optimal formulation, consisting of 5.0% agar, 0.1% NaCl, and 0.44% NaN₃, as producing a temperature profile most closely resembling that of the pork tissue model. Taguchi analysis indicated that agar concentration was the most significant factor influencing temperature variation. The interactions among the three formulation variables were weak, suggesting that each factor could be optimized independently.
Significance.
The agar phantom developed in this study features simple fabrication, reusability, and long-term storability, making it a practical tool for detecting abnormal heating in hyperthermia devices and enhancing treatment safety and accuracy. It is suited for routine QA of the Oncotherm electro-hyperthermia system.
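Taguchi analysis of this kind ranks formulations by a signal-to-noise (S/N) ratio. A minimal sketch, assuming a "smaller-is-better" response such as the absolute temperature deviation from the pork-tissue reference (the deviation values below are made up for illustration):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a smaller-is-better response:
    S/N = -10 * log10(mean(y^2)). Here y would be the absolute deviation
    of a phantom's temperature profile from the tissue reference, so a
    larger S/N indicates a smaller, more consistent deviation."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical deviations (degC) of two candidate formulations from the
# pork-tissue reference over three repeated measurements.
dev_close = [0.2, 0.3, 0.25]
dev_far = [1.0, 1.2, 0.8]
```

In a full Taguchi study, the S/N ratio is computed for each orthogonal-array run and averaged per factor level to identify the dominant factor, which in this paper turned out to be agar concentration.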
The following article is
Open access
Structural identifiability of single-beat estimation of the ventricular end-systolic pressure–volume relationship
Fabijan Lulić
et al
2026
Biomed. Phys. Eng. Express
12
035008
View article
, Structural identifiability of single-beat estimation of the ventricular end-systolic pressure–volume relationship
PDF
, Structural identifiability of single-beat estimation of the ventricular end-systolic pressure–volume relationship
Single-beat (SB) approaches for estimating the end-systolic pressure–volume relationship (ESPVR) from a single pressure–volume (P–V) loop are widely used in experimental and clinical research, yet their structural foundations remain insufficiently formalized. ESPVR is a phenomenological construct defined from multi-beat (MB) measurements across varying loading conditions, and its SB estimation therefore constitutes an inverse problem. We show that SB estimation of the ESPVR slope (E_es) is intrinsically underdetermined, as a single P–V loop provides fewer independent constraints than unknown parameters. Consequently, any SB method requires auxiliary information to achieve mathematical closure. Using nine high-fidelity P–V loops obtained during vena cava occlusion in a porcine model as a demonstrator dataset, we quantify (i) the sensitivity of the MB-derived E_es,MB to the selection of the P–V loop subset, and (ii) the sensitivity of E_es to representative classes of auxiliary information. The analysis reveals that auxiliary information based solely on population-averaged normalized elastance curves is structurally inconsistent with the MB reference definition. Among the examined candidates, the normalized elastance at the onset of ejection (E_N,dia) exhibits the most favorable structural properties, combining low sensitivity of E_es to estimation errors with a strong empirical association captured by regression modeling. By reframing SB estimation of ESPVR as a structural identifiability problem rather than a purely numerical task, this study establishes criteria for physiologically consistent auxiliary relations and highlights the necessity of large, standardized MB databases for future data-driven SB methodologies.
The following article is
Open access
Impact of multimodal information on the estimation of fetal growth indicators using machine learning regression models
Orlando Castellanos-Diaz
et al
2026
Biomed. Phys. Eng. Express
12
035005
View article
, Impact of multimodal information on the estimation of fetal growth indicators using machine learning regression models
PDF
, Impact of multimodal information on the estimation of fetal growth indicators using machine learning regression models
Accurate estimation of fetal growth indicators such as birth weight, birth length, and gestational age at birth is essential for monitoring pregnancy outcomes and guiding clinical decisions. Traditional predictive models typically rely on ultrasound-based fetometric data to estimate fetal weight or length. While valuable, these models provide estimates only at the time of measurement, rather than predicting values at birth, and may overlook important clinical and sociodemographic factors that also influence fetal growth. This study aimed to evaluate whether incorporating echographic, clinical, and sociodemographic features could improve the accuracy of predicting fetal growth indicators at birth and to quantify the contribution of each variable. Data from 154 cases were collected and processed for model development (61.5% for training and 38.5% for testing), divided into three feature sets: fetometric, clinical–sociodemographic, and combined clinical, echographic, and sociodemographic data. Six regression models were developed to predict three fetal growth indicators: birth weight, birth length, and gestational age at birth. Model performances were assessed using the coefficient of determination (R²), mean absolute error (MAE), and mean absolute percentage error (MAPE). The multimodal models significantly outperformed those relying only on fetometric or clinical–sociodemographic data, with the random forest achieving the best performance for birth weight (R²: 0.8991; MAE: 255.08 g; MAPE: 8.46%), birth length (R²: 0.8679; MAE: 7.21 cm; MAPE: 2.73%), and gestational age at birth (R²: 0.8886; MAE: 1.34 d; MAPE: 2.76%). Feature relevance analysis revealed that variables such as maternal height, maternal weight, placenta location, and alcohol consumption played substantial roles in prediction accuracy, alongside classic fetometric measurements such as head circumference. These findings highlight the multifactorial nature of fetal growth and demonstrate that integrating clinical and sociodemographic information enhances the performance of fetal growth prediction models, ultimately supporting improved perinatal care.
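The three metrics reported above are standard and easy to compute from paired predictions; a minimal sketch (the arrays in the usage below are illustrative, not the study's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute R^2, MAE, and MAPE (%) for a set of regression predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(y_true - y_pred))
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    return r2, mae, mape
```

Note that MAPE divides by the true values, which is why it is well suited to birth weight and length (always positive and far from zero).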
Cluster dose prediction in carbon ion therapy: Using transfer learning from a pretrained dose prediction U-Net - A proof of concept
Miriam Schwarze et al 2026 Biomed. Phys. Eng. Express (open access)
The cluster dose concept offers an alternative to relative biological effectiveness (RBE)-based models for describing radiation-induced biological effects. This study examines the application of a neural network to predict cluster dose distributions, with the goal of replacing the computationally intensive simulations currently required. Cluster dose distributions are predicted using a U-Net that was initially pretrained on conventional dose distributions. Using transfer learning techniques, the decoder path is adapted for cluster dose estimation. Both the training and pretraining datasets include head and neck regions from multiple patients and carbon ion beams of varying energies and positions. Monte Carlo (MC) simulations were used to generate the ground truth cluster dose distributions. The U-Net enables cluster dose estimation for a single pencil beam within milliseconds using a graphics processing unit (GPU). The predicted cluster dose distributions deviate from the ground truth by less than 0.35%. This proof-of-principle study demonstrates the feasibility of accurately estimating cluster doses within clinically acceptable computation times using machine learning (ML). By leveraging a pretrained neural network and applying transfer learning techniques, the approach significantly reduces the need for large-scale, computationally expensive training data.
Circadian limbic and thalamic beta oscillations drive slow-adapting dual-threshold adaptive deep brain stimulation in Tourette syndrome
Rachel A Davis et al 2026 Biomed. Phys. Eng. Express 12 027002 (open access)
Objective.
Patients with Tourette syndrome (TS) may benefit from lower deep brain stimulation (DBS) current during sleep to reduce side effects and habituation, yet manual adjustment can be challenging. We tested the feasibility of an adaptive deep brain stimulation (aDBS) paradigm in which circadian fluctuations in beta activity serve as a biomarker to reduce stimulation during sleep.
Approach.
We analyzed chronic beta-band local field potentials (LFPs) recorded from sensing-enabled DBS leads in bilateral ventral capsule/ventral striatum (VC/VS) and centromedian-parafascicular nucleus of the thalamus (CM-Pf) in a patient with refractory TS and obsessive-compulsive disorder. We used circadian beta fluctuations in the VC/VS and CM-Pf to drive a dual-threshold algorithm that we configured to function as a slowly adapting single-threshold system. Because the clinically optimal electrode configuration created sensing constraints, we linked VC/VS and CM-Pf within each hemisphere so beta activity from either target could trigger automatic nighttime stimulation reduction.
Results.
Multi-week recordings showed clear circadian beta rhythmicity in both VC/VS and CM-Pf. The slowly adapting single-threshold aDBS algorithm with cross-target tethering reduced stimulation during sleep while maintaining stable daytime stimulation. A 16-night period in which aDBS was inadvertently disabled created a direct comparison between aDBS and continuous DBS (cDBS). Clinical measures showed modest reductions in tic severity and substantial reductions in depression during aDBS compared with cDBS.
Significance.
This proof-of-concept study demonstrates: (1) circadian modulation of beta rhythms in limbic and thalamic targets, supporting the feasibility of leveraging these signals for adaptive DBS; (2) that cross-target tethering is feasible (i.e. adapting stimulation in the VC/VS based on sensing in the CM-Pf, and vice versa); and (3) that a dual-threshold algorithm can be configured to enable gradual transitions between maximal and minimal stimulation. The patient’s observed improvements during circadian-driven aDBS compared with cDBS suggest potential clinical benefit, warranting testing in larger samples.
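The dual-threshold control law described above can be sketched in a few lines. This is a hypothetical illustration, not the device's firmware: the amplitude limits, step size, and thresholds are invented values, and real systems operate on smoothed beta power over long windows.

```python
def update_amplitude(amp, beta_power, lower, upper,
                     amp_min=1.0, amp_max=3.0, step=0.1):
    """One control-loop update of a dual-threshold aDBS scheme (sketch).

    Sustained high beta (e.g. overnight) steps the stimulation current
    down toward amp_min; sustained low beta steps it back up toward
    amp_max. Between the two thresholds the amplitude holds, so with a
    small step size the device ramps gradually instead of switching,
    which is the "slowly adapting single-threshold" behavior.
    """
    if beta_power >= upper:
        return max(amp_min, amp - step)
    if beta_power <= lower:
        return min(amp_max, amp + step)
    return amp
```

Repeated calls under persistently high beta walk the amplitude down in 0.1 mA increments until it clamps at the floor, rather than jumping straight to minimal stimulation.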
Dosimetric evaluation of simultaneous multi-energy (2-18 MV) and intensity optimization for IMRT and VMAT
Aliasghar Rohani et al 2026 Biomed. Phys. Eng. Express (open access)
Objective. This study aimed to develop and evaluate a framework for the simultaneous optimization of beam fluence and multiple photon energies for IMRT and VMAT. Approach. An Elekta Versa HD linear accelerator (linac) was modeled using the BEAMnrc/EGSnrc Monte Carlo platform to simulate 2 MV, 4 MV, and 6 MV photon beams, complemented with measured beam data for 10 MV and 18 MV. A custom MATLAB-based framework was developed to perform simultaneous optimization of energy and fluence, allowing energy weighting of individual beamlets. A total of 12 energy configurations, spanning single- (SE), dual- (DE), triple- (TE), and quadruple-energy (QE) combinations, were generated for thirteen prostate cancer patients. Main results. Target coverage remained comparable across all energy configurations. Compared with 6 SE, multi-energy IMRT and VMAT achieved superior OAR sparing, with the greatest benefits for bladder and rectum in the 5–40 Gy range. TE-IMRT reduced bladder and rectal mean doses by up to 1.6 Gy, with V10 reductions of ~8%; on average, mean doses were reduced by 0.9 Gy (p = 0.001) and 1.0 Gy (p = 0.001). For VMAT, the 6&18 DE-VMAT plan yielded the largest sparing, lowering bladder and rectal mean doses by ~1.0 Gy. Relative to 10 SE, all multi-energy plans achieved equal or lower OAR doses, with TE-IMRT and 6&18 DE-VMAT showing the most consistent benefit, reducing mean doses to the bladder, rectum, right femoral head, and left femoral head by up to 1.26, 0.77, 0.6, and 0.55 Gy, respectively. Significance. This study presents the first assessment of the feasibility and potential clinical value of simultaneous optimization of beam intensity and more than two photon energies for IMRT and VMAT plans, producing optimized multi-energy fluence maps that require subsequent conversion into clinically deliverable treatment plans.
Frequency-induced fatigue in electrically stimulated sheep hindlimb muscles
Berta Mateu-Yus et al 2026 Biomed. Phys. Eng. Express (open access)
Functional electrical stimulation (FES) is an effective technique for restoring motor function in patients with paralysis. The early onset of muscle fatigue remains a major drawback, limiting its widespread clinical adoption. It is hypothesized that the high frequencies used in FES may be the primary factor determining muscle fatigue onset. Yet few studies have assessed the dependence of muscle fatigue on stimulation frequency. In particular, there is a need for a systematic evaluation across a continuous range of frequencies. Muscle fatigue dependence on stimulation frequency was assessed in anesthetized sheep, with the aim of modeling human musculature with physiological fidelity in the absence of potentially interfering reflexes. Following surgical muscle exposure, symmetrical 250+250 µs biphasic pulse trains were delivered via hook wire intramuscular monopolar electrodes to either the tibialis cranialis or the extensor digitorum lateralis muscle, and isometric contraction forces were recorded. Eleven frequencies were assayed from 5 Hz to 100 Hz, with rest periods of over 10 minutes between trials. The extracted parameters included peak force, time to peak force, time to fatigue (defined as a 25% force drop), and the slope of force decline at fatigue. Additionally, muscle contraction ripple was assessed. Both muscles exhibited increasing fatigue with frequency, revealing three distinct frequency ranges. Fatigue rate was very slow below 15-20 Hz, gradually increased between ~20-50 Hz, and rose sharply above 50-75 Hz, reaching fatigue in a few seconds. Remarkably, fatigue rate only started to increase substantially at 10-20 Hz. Force fusion increased with stimulation frequency, with both muscles showing fused contractions from approximately 12 Hz. The minimal fatigue observed at frequencies corresponding to natural motor unit firing rates suggests that the high frequencies used in FES are a key driver of fatigue.
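The 25% force-drop fatigue criterion used above is straightforward to operationalize on a recorded isometric force trace; a minimal sketch (the trace in the test below is illustrative, not the study's data):

```python
import numpy as np

def time_to_fatigue(t, force, drop=0.25):
    """Return the time at which force first falls to (1 - drop) of its
    peak, mirroring a 25% force-drop fatigue criterion.

    Searches only after the peak, so the rising phase of the
    contraction cannot trigger the criterion. Returns None if the
    trace never fatigues within the recording.
    """
    force = np.asarray(force, dtype=float)
    i_peak = int(np.argmax(force))
    threshold = force[i_peak] * (1.0 - drop)
    below = np.nonzero(force[i_peak:] <= threshold)[0]
    return None if below.size == 0 else t[i_peak + below[0]]
```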
Statistical modeling of blood and tissue signatures using ultrasonic color flow imaging
Atefeh Abdolmanafi et al 2026 Biomed. Phys. Eng. Express 12 025073 (open access)
Conventional color flow processing is primarily optimized for qualitative visualization of flow dynamics, limiting its diagnostic use in regions where vascular structures are small relative to the ultrasound beamwidth. Leveraging the statistical properties of color flow data may provide a pathway toward quantitative discrimination between blood and tissue signals. This could enhance detection of vascular abnormalities, improve diagnostic accuracy, and support monitoring in diseases with small hemodynamic changes. Experimental data were obtained using a clinical GE LOGIQ 9 ultrasound system with a 10L linear array probe (3.75 MHz) positioned on an in-house-built half-space flow phantom with the focus located at 3 cm depth. The simulation data, obtained from Field II, used a setup analogous to the experimental settings. The theoretical probability density function of ultrasound color flow power was derived using a gamma distribution. Shape parameters for blood and tissue were estimated using maximum likelihood estimation (MLE) in both simulation and experimental data. Color flow power was found to follow the gamma distribution in both simulation and experimental data. The estimated shape parameters aligned with theoretical predictions and distinguished between blood and tissue: estimated shape parameters were less than or equal to 1 for tissue samples and greater than 1 for blood samples. This study presents a statistical modeling approach to enhance blood-tissue differentiation in color flow ultrasound, enabling blood characterization and perfusion quantification for improved detection and monitoring of vascular abnormalities.
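The blood/tissue decision above reduces to estimating the gamma shape parameter from power samples. As a sketch, the snippet below uses a moment-based estimator rather than the paper's full MLE (a deliberate simplification: for Gamma(k, θ), mean = kθ and variance = kθ², so k = mean²/variance), plus a toy decision rule mirroring the reported shape-parameter split:

```python
import numpy as np

def gamma_shape_moment(power_samples):
    """Moment-based estimate of the gamma shape parameter k:
    k = mean^2 / variance. A simpler stand-in for maximum likelihood."""
    x = np.asarray(power_samples, dtype=float)
    return x.mean() ** 2 / x.var()

def classify(power_samples):
    """Toy rule mirroring the reported finding: shape <= 1 is
    tissue-like, shape > 1 is blood-like."""
    return "blood" if gamma_shape_moment(power_samples) > 1.0 else "tissue"
```

For production use, `scipy.stats.gamma.fit` provides the actual MLE the abstract refers to.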
The impact of microsphere deposition algorithm complexity on microdosimetry following Yttrium-90 radioembolization
E Courtney Henry et al 2026 Biomed. Phys. Eng. Express 12 025070 (open access)
The microsphere spatial distribution following Yttrium-90 radioembolization (⁹⁰Y-RE) is inherently nonuniform, resulting in substantial microscopic dose heterogeneity not captured by conventional macroscopic dosimetry models. The motivation for this study was to build a robust framework to further understand the relationship between microdosimetry and macrodosimetry-based clinical outcomes. In this study, a stochastic microsphere deposition algorithm sampled histologically derived cumulative distribution functions (CDFs) governing microsphere cluster diameter, distance between clusters, and cluster population. Six unique models were generated to examine the impact of algorithm complexity on the corresponding absorbed dose distribution, ranging from a completely uniform model to a fully stochastic reference model. A two-sample Kolmogorov-Smirnov test compared cluster diameter, inter-cluster distance, and cluster population samples derived separately from discrete and continuous CDFs. Microdosimetry calculations were performed by convolving a high-resolution dose-voxel kernel with each model. The mean absorbed dose and various dose-volume metrics were calculated and compared to the reference model to assess the impact of algorithm complexity on dose metric error. Published median values of the cluster parameters agreed well with their simulated counterparts. There were no statistically significant differences in sampling between discrete and continuous CDFs for any of the three parameters. Convolution with the ⁹⁰Y dose-voxel kernel resulted in a −0.3% deviation compared to a single-compartment dose estimate. Model comparisons suggest that stochastic sampling of one cluster parameter is critical for accurately modeling low-dose regions, while sampling another is critical for resolving absorbed dose hot spots; in contrast, sampling the third had minimal impact on model accuracy. The results of this study provide the necessary framework to develop an improved understanding of the relationship between microdosimetry and macrodosimetry-based clinical outcomes following ⁹⁰Y-RE.
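The two-sample Kolmogorov-Smirnov comparison used above has a compact definition: the largest vertical gap between the two samples' empirical CDFs. A minimal sketch of the statistic (the p-value computation, as in `scipy.stats.ks_2samp`, is omitted):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: the maximum vertical distance between
    the empirical CDFs of samples a and b."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    grid = np.concatenate([a, b])  # evaluate both ECDFs at every sample point
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

A statistic near 0 indicates the discrete and continuous CDF sampling schemes draw from effectively the same distribution; a value near 1 indicates almost disjoint samples.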
Differentiating long QT syndrome genotypes using electrocardiographic geometric parameterization and machine learning approaches
Martina Srutova et al 2026 Biomed. Phys. Eng. Express 12 025067 (open access)
Long QT syndrome (LQTS) is an inherited cardiac disorder characterized by dysfunctional cardiac ion channels, which result in prolonged QT intervals on electrocardiograms (ECGs). LQTS can lead to severe clinical manifestations, including syncope, ventricular arrhythmias, and sudden cardiac death. Effective genotype-specific management strategies are essential to mitigate the risk of life-threatening arrhythmias. This study aims to achieve automatic discrimination among the LQT1, LQT2, and LQT3 genotypes to enable targeted treatment and prevention strategies. Utilizing ECG data from the Telemetric and Holter ECG Warehouse’s LQTS database, our methodology involves an automated extraction process of short ECG signals, geometric parameterization techniques, and classification using a two-stage cascade of binary support vector machine classifiers. The input features for the classifiers are derived from Lead I ECG signals sampled at 200 Hz. This highlights the potential application in developing single-lead ECG monitoring devices, such as widely used smartwatches, for which short recording duration, low sampling frequency, and arm-to-arm lead measurements are fundamental prerequisites for practical use. The proposed classifier achieved 71% weighted accuracy on out-of-sample data (LQT1: 65% recall, 58% precision; LQT2: 79% recall, 82% precision; and LQT3: 71% recall, 77% precision). Our findings demonstrate the feasibility of noninvasive genotype differentiation for LQTS based on the morphological analysis of ECG signals, providing an advancement in the field of personalized cardiology and the development of portable diagnostic tools.
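The two-stage cascade logic is simple to sketch. Note the stage assignment below (LQT3 first) and the feature names are invented for illustration; the abstract does not restate which genotype each binary SVM isolates.

```python
def cascade_predict(features, stage1, stage2):
    """Two-stage cascade of binary classifiers over three classes.

    stage1 and stage2 are any callables returning True/False, standing
    in for the paper's binary SVMs: stage 1 splits one genotype from
    the other two, stage 2 separates the remaining pair. The stage
    assignment here is illustrative only.
    """
    if stage1(features):
        return "LQT3"
    return "LQT1" if stage2(features) else "LQT2"
```

A cascade like this turns a three-class problem into two binary decisions, which suits margin-based classifiers such as SVMs.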
More Open Access articles
TIGRE: a MATLAB-GPU toolbox for CBCT image reconstruction
Ander Biguri et al 2016 Biomed. Phys. Eng. Express 055010 (open access)
In this article the Tomographic Iterative GPU-based Reconstruction (TIGRE) Toolbox, a MATLAB/CUDA toolbox for fast and accurate 3D x-ray image reconstruction, is presented. One of the key features is the implementation of a wide variety of iterative algorithms as well as FDK, including a range of algorithms in the SART family, the Krylov subspace family and a range of methods using total variation regularization. Additionally, the toolbox has GPU-accelerated projection and back projection using the latest techniques and it has a modular design that facilitates the implementation of new algorithms. We present an overview of the structure and techniques used in the creation of the toolbox, together with two usage examples. The TIGRE Toolbox is released under an open source licence, encouraging people to contribute.
Nano-based drug delivery system for therapeutics: a comprehensive review
Satyendra Prakash 2023 Biomed. Phys. Eng. Express 052002
Nanomedicine and nano-delivery systems hold great potential in the developing sciences: nanoscale carriers are employed to deliver therapeutic drugs efficiently and in a controlled manner to specifically targeted sites, offering advantages in improved efficacy and reduced adverse drug reactions. These nano-delivery systems provide target-oriented, site-specific delivery of drugs with mild toxicity, prolonged circulation time, high solubility, and long retention time in the biological system, circumventing the problems associated with conventional delivery approaches. Recently, nanocarriers such as dendrimers, liposomes, nanotubes, and nanoparticles have been extensively investigated in terms of structural characteristics, size manipulation, and selective diagnosis through disease-imaging molecules, proving highly effective and introducing a new paradigm in drug delivery. In this review, the use of nanomedicines in drug delivery is demonstrated for the treatment of various diseases, with significant advances and applications in different fields. In addition, this review discusses current challenges and future directions for research in these promising fields.
From pixels to prognosis: unveiling radiomics models with SHAP and LIME for enhanced interpretability
Sotiris Raptis et al 2024 Biomed. Phys. Eng. Express 10 035016 (open access)
Radiomics-based prediction models have shown promise in predicting Radiation Pneumonitis (RP), a common adverse outcome of chest irradiation. This study looks beyond RP alone: it also investigates a broader shift in the way radiomics-based models work. By integrating multi-modal radiomic data, which includes a wide range of variables collected from medical images including cutting-edge PET/CT imaging, we have developed predictive models that capture the intricate nature of illness progression. Radiomic features were extracted using PyRadiomics, encompassing intensity, texture, and shape measures. The high-dimensional dataset formed the basis for our predictive models, primarily Gradient Boosting Machines (GBM): XGBoost, LightGBM, and CatBoost. Performance evaluation metrics, including multi-modal AUC-ROC, sensitivity, specificity, and F1-score, underscore the superiority of the Deep Neural Network (DNN) model. The DNN achieved a remarkable multi-modal AUC-ROC of 0.90, indicating superior discriminatory power. Sensitivity and specificity values of 0.85 and 0.91, respectively, highlight its effectiveness in detecting positive occurrences while accurately identifying negatives. External validation datasets, comprising retrospective patient data and a heterogeneous patient population, confirm the robustness and generalizability of our models. The focus of our study is the application of sophisticated model interpretability methods, namely SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), to improve the clarity and understanding of predictions. These methods allow clinicians to visualize the effects of features and provide localized explanations for every prediction, enhancing the comprehensibility of the model. This strengthens trust and collaboration between computational technologies and medical competence.
The integration of data-driven analytics and medical domain expertise represents a significant shift in the profession, advancing us from analyzing pixel-level information to gaining valuable prognostic insights.
A mechanistic investigation of the oxygen fixation hypothesis and oxygen enhancement ratio
David Robert Grimes and Mike Partridge 2015 Biomed. Phys. Eng. Express 045209 (open access)
The presence of oxygen in tumours has a substantial impact on treatment outcome; relative to anoxic regions, well-oxygenated cells respond better to radiotherapy by a factor of 2.5–3. This increased radio-response is known as the oxygen enhancement ratio. The oxygen effect is most commonly explained by the oxygen fixation hypothesis, which postulates that radical-induced DNA damage can be permanently ‘fixed’ by molecular oxygen, rendering the damage irreparable. While the oxygen effect is important both in existing therapy and for future modalities such as radiation dose-painting, the majority of existing mathematical models for oxygen enhancement are empirical rather than based on the underlying physics and radiochemistry. Here we propose a model of oxygen-enhanced damage from physical first principles, investigating factors that might influence cell kill. The model is fitted to a range of experimental oxygen curves from the literature and shown to describe them well, yielding a single robust term for oxygen interaction. The model also reveals a small thermal dependency, but one that is unlikely to be exploitable.
Machine learning models to predict the relationship between printing parameters and tensile strength of 3D Poly (lactic acid) scaffolds for tissue engineering applications
Duygu Ege et al 2023 Biomed. Phys. Eng. Express 065014 (open access)
3D printing is an effective method for preparing 3D scaffolds for tissue engineering applications. However, optimizing printing conditions to obtain suitable mechanical properties for various tissue engineering applications is costly and time consuming. To address this problem, in this study the scikit-learn Python machine learning library was used to apply four machine learning-based approaches, ordinary least squares (OLS) linear regression, random forest (RF), light gradient boosting machine (LGBM), and extreme gradient boosting (XGB), as well as artificial neural network models, to understand the relationship between 3D printing parameters and the tensile strength of poly(lactic acid) (PLA). 68 combinations of process parameters for nozzle temperature, printing speed, layer height, and tensile strength were collected from the investigated research papers. The datasets were then divided into training (80%) and test (20%) sets. After building the OLS linear regression, RF, LGBM, XGB, and artificial neural network models, the correlation heatmap and the feature importance of each printing parameter for tensile strength were determined. The tensile strength was then predicted for real datasets to evaluate the performance of the models. The results demonstrate that the XGB model was the most successful in predicting tensile strength among the studied models, with R² values of 0.98 and 0.94 for the training and test sets, respectively. The closeness of the train and test R² values also indicated that the model did not overfit the data. Finally, SHAP analysis shows the significance of each feature for the prediction of tensile strength. This study can be extended to independent variables including nozzle pressure, strut size, and molecular weight of PLA, and to dependent variables such as elongation and elastic modulus of PLA, which may provide a powerful tool to predict the mechanical properties of scaffolds for tissue engineering applications.
CT organ dose calculator size adaptive for pediatric and adult patients
Choonsik Lee et al 2022 Biomed. Phys. Eng. Express 065020
Background. Although computed tomography (CT) has played a critical role in medical care since its introduction in the 1970s, its potential long-term risk of adverse health effects has been of concern. It is crucial to accurately estimate the radiation dose delivered to the patient’s critical organs to ensure the dose is As Low As Reasonably Achievable. However, organ-level dose calculation tools for pediatric and adult patients with various body sizes are rare. We extended the existing CT organ dose calculator, NCICT 1.0, which is based on reference-size phantoms, to include body size-specific pediatric and adult phantoms.
Methods. We calculated body size-specific organ doses normalized to CTDIvol by using a library of 158 pediatric and 193 adult computational human phantoms with various body sizes combined with a Monte Carlo radiation transport code, MCNP6. We also created a library of generic tube current modulation (TCM) profiles for the phantom library using a ray-tracing algorithm and implemented them into the organ dose calculations. We validated organ doses for the body size-specific phantoms against those calculated from ten abdominal CT patients. We also evaluated potential dosimetric errors caused by using only reference phantoms for patients with different body sizes.
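At the point of use, a calculator of this kind reduces to a dose-coefficient lookup scaled by the scan's CTDIvol. The sketch below illustrates that structure only; the coefficient values and size classes are invented for illustration and are not NCICT's.

```python
# Hypothetical dose coefficients (organ dose in mGy per mGy of CTDIvol),
# keyed by organ and phantom size class -- invented values, not NCICT's.
DOSE_COEFF = {
    ("liver", "pediatric"): 1.4,
    ("liver", "adult"): 1.1,
    ("liver", "adult-overweight"): 0.8,
}

def organ_dose_mgy(organ, size_class, ctdi_vol_mgy):
    """Organ dose = size-specific coefficient x scanner-reported CTDIvol."""
    return DOSE_COEFF[(organ, size_class)] * ctdi_vol_mgy
```

In this toy table, applying the reference "adult" coefficient to an overweight patient would overstate the liver dose (1.1 versus 0.8 per mGy of CTDIvol), the kind of error a size-specific phantom library is meant to reduce.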
Results. Organ dose coefficients and TCM profiles for 351 pediatric and adult body size-specific phantoms were implemented into NCICT 2.0. The dose coefficients from the ten abdominal CT patients agreed with those from the program within 13%. The organ doses for the overweight phantoms were overestimated by over 80% when only reference-size phantoms were used.
Conclusion. We confirmed that the upgraded dose calculator, NCICT 2.0, can substantially reduce potential dosimetric errors caused by using only reference-size phantoms. The program should be useful for the radiology community to accurately monitor organ doses for pediatric and adult CT patients with various body sizes.
Biocompatibility evaluation for the developed hydrogel wound dressing – ISO-10993-11 standards – in vitro and in vivo study
A V Thanusha and Veena Koul 2022 Biomed. Phys. Eng. Express 015010
Assessment of the biocompatibility of a developed wound dressing plays a significant role in translational studies. In the present research work, a wound dressing was developed using gelatin, hyaluronic acid, and chondroitin sulfate, with EDC as the crosslinker, in a specific manner. The characterized hydrogel wound dressing was evaluated for biocompatibility according to ISO 10993-11 medical device rules and standards. Various parameters, including a skin sensitization test, acute systemic toxicity test, implantation study, intracutaneous reactivity test, in vitro cytotoxicity test, and bacterial reverse mutation test, were evaluated, and the results demonstrated its safety for pre-clinical investigation.
Fabrication and characterization of a starch-based nanocomposite scaffold with highly porous and gradient structure for bone tissue engineering
Fereshtehsadat Mirab et al 2018 Biomed. Phys. Eng. Express 055021
Starch-based scaffolds are considered promising biomaterials for bone tissue engineering. In this study, a highly porous starch/polyvinyl alcohol (PVA) based nanocomposite scaffold with a gradient pore structure was made by incorporating different bio-additives, including citric acid, cellulose nanofibers, and hydroxyapatite (HA) nanoparticles. The scaffold was prepared by unidirectional cryogenic freeze-casting followed by freeze-drying. Fourier transform infrared (FTIR) spectroscopy confirmed the cross-linking of starch and PVA molecules through multiple esterification in the presence of citric acid as a cross-linking agent. Field emission scanning electron microscopy (FE-SEM) observations showed the formation of aligned lamellar pores with a gradient pore width in the range of 80 to 292 µm, which meets the pore size requirement for bone regeneration, as well as good dispersion of the cellulose and HA nanofillers within the scaffold matrix. Based on the mechanical testing results, the cellulose-HA reinforced scaffold possesses sufficient compressive modulus and yield strength for non-load-bearing applications in the dry state, and it also shows fast, responsive shape recovery in the wet state. According to in vitro assessments, apatite phase mineralization was extensively induced in the presence of HA nanoparticles acting as heterogeneous nucleation sites. It was also revealed that the cellulose and HA nanofillers decelerate and accelerate the scaffold biodegradation rate, respectively. An MTT assay proved good cytocompatibility of the nanocomposite scaffold with osteoblast cells. Finally, it was shown that the introduced scaffold provides a suitable platform for cell adhesion.
A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET/CT imaging
Nicholas E Protonotarios et al 2022 Biomed. Phys. Eng. Express 025019
Over the past few years, positron emission tomography/computed tomography (PET/CT) imaging for computer-aided diagnosis has received increasing attention. Supervised deep learning architectures are usually employed for the detection of abnormalities, with anatomical localization, especially in the case of CT scans. However, the main limitations of the supervised learning paradigm include (i) the large amounts of data required for model training, and (ii) the assumption of fixed network weights upon training completion, implying that the performance of the model cannot be further improved after training. In order to overcome these limitations, we apply a few-shot learning (FSL) scheme. Contrary to traditional deep learning practices, in FSL the model is provided with less data during training. The model then utilizes end-user feedback after training to constantly improve its performance. We integrate FSL in a U-Net architecture for lung cancer lesion segmentation on PET/CT scans, allowing for dynamic model weight fine-tuning and resulting in an online supervised learning scheme. Constant online readjustment of the model weights according to the users' feedback increases the detection and classification accuracy, especially in cases where low detection performance is encountered. Our proposed method is validated on the Lung-PET-CT-DX TCIA database. PET/CT scans from 87 patients were included in the dataset and were acquired 60 minutes after intravenous ¹⁸F-FDG injection. Experimental results indicate the superiority of our approach compared to other state-of-the-art methods.
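The feedback-driven weight readjustment described above can be illustrated with a minimal sketch. This is not the authors' U-Net: the per-pixel linear classifier, the toy 8×8 "scan", and the `online_finetune` helper are all hypothetical stand-ins, showing only the core idea of taking gradient steps on a user-corrected mask after initial training.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, image):
    """Per-pixel lesion probability from a linear model with bias."""
    x = image.reshape(-1, image.shape[-1])
    x = np.hstack([x, np.ones((len(x), 1))])  # bias column
    return sigmoid(x @ w).reshape(image.shape[:2])

def online_finetune(w, image, user_mask, lr=0.5, steps=200):
    """One feedback round: gradient steps on the cross-entropy
    between the model's predictions and the user-corrected mask."""
    x = image.reshape(-1, image.shape[-1])
    x = np.hstack([x, np.ones((len(x), 1))])
    y = user_mask.reshape(-1).astype(float)
    for _ in range(steps):
        p = sigmoid(x @ w)
        w = w - lr * x.T @ (p - y) / len(y)
    return w

# Toy "scan": 8x8 pixels, 2 features each; lesion pixels carry a
# higher first-feature intensity (hypothetical stand-in for tracer uptake).
mask = np.zeros((8, 8), dtype=int)
mask[2:5, 2:5] = 1
image = rng.normal(size=(8, 8, 2))
image[..., 0] += 3.0 * mask

w = np.zeros(3)                      # untrained weights (2 features + bias)
w = online_finetune(w, image, mask)  # fold in the user's corrected mask
accuracy = ((predict(w, image) > 0.5) == mask).mean()
print(f"pixel accuracy after feedback round: {accuracy:.2f}")
```

In the paper's setting the same loop would run over U-Net parameters with the clinician's corrected segmentation as the target, so that each feedback round nudges the deployed weights rather than leaving them frozen.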
Cold plasma irradiation inhibits skin cancer via ferroptosis
Tao Sun
et al
2024
Biomed. Phys. Eng. Express
10
065036
View article
Cold atmospheric plasma (CAP) has been extensively utilized in medical treatment, particularly in cancer therapy. However, the underlying mechanism of CAP in skin cancer treatment remains elusive. In this study, we established a skin cancer model using CAP treatment in vitro, and a xenograft model in vivo. The results demonstrated that treatment with CAP induced ferroptosis, resulting in a significant reduction in the viability, migration, and invasive capacities of A431 squamous cell carcinoma, a type of skin cancer. Mechanistically, the substantial production of reactive oxygen species (ROS) by CAP induces DNA damage, which then activates Ataxia-telangiectasia mutated (ATM) and p53 through acetylation, while simultaneously suppressing the expression of Solute Carrier Family 7 Member 11 (SLC7A11). This cascade leads to the down-regulation of intracellular Glutathione peroxidase 4 (GPX4), ultimately resulting in ferroptosis. CAP thus exhibits a favorable impact on skin cancer treatment, suggesting its potential medical application in skin cancer therapy.
Journal information
2015-present
Biomedical Physics & Engineering Express
doi: 10.1088/issn.2057-1976
Online ISSN: 2057-1976