Information and interesting ideas

Topics posted here will be in the realm of bioanalysis and biomarkers as part of new therapeutic development, with the occasional post of scientific topics that I find interesting.

Monday, February 27, 2023

Publication: A Novel Hybridization LC-MS/MS Methodology for Quantification of siRNA in Plasma, CSF and Tissue Samples

Long Yuan, a former colleague of mine, and a few current colleagues recently published a method for double-stranded siRNA using hybridization extraction and LC-MS/MS. Long reports the first hybridization method for double-stranded siRNA using conventional DNA capture probes, as well as a peptide nucleic acid (PNA) capture probe. The PNA has a higher affinity for the siRNA and therefore greater recovery efficiency (which was demonstrated experimentally). To optimize the LC-MS/MS conditions, they used the active antisense siRNA strand. However, for the calibration curve and QCs the double-stranded siRNA was used. A typical BMV-style validation was performed. The calibration curve was 2-1000 ng/mL and the S/N at the LLOQ appeared to be >10. They tested monkey plasma, CSF and 8 different tissues. Using the PNA probe, they achieved around 90% recovery, demonstrating the viability of the approach for this and other double-stranded siRNA therapies. More details are in the article.
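As a quick refresher on how a recovery number like that is typically derived (my illustration, not the authors' exact workflow), the response from samples spiked before extraction is compared with samples spiked into extracted blank matrix after extraction. A minimal sketch with made-up peak areas:

```python
# Illustrative extraction-recovery calculation; values are made up.
pre_extraction_spike_area = 9.1e5   # siRNA spiked into plasma, then hybridization-extracted
post_extraction_spike_area = 1.0e6  # siRNA spiked into an extracted blank at the same level
recovery_pct = 100 * pre_extraction_spike_area / post_extraction_spike_area
print(f"Recovery = {recovery_pct:.0f}%")  # ~91%, in line with the ~90% reported
```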

Thursday, February 23, 2023

Publication: Applied Clinical Tandem Mass Spectrometry-Based Quantification Methods for Lipid-Derived Biomarkers, Steroids and Cannabinoids: Fit-for-Purpose Validation Methods

Matias, I., et al., recently published an article in Biomolecules on a topic that I've been interested in and discussing for a few years: the use of LC-MS/MS in clinical labs for biomarkers, therapeutic drug monitoring and cannabinoids. Not satisfied with just LC-MS/MS, they included GC-MS/MS in their work to provide a complete set of tools for their clinical studies using technologies that are seeing growth in clinical laboratories.

 

Interestingly, they applied the concepts from the bioanalytical guidances (FDA, EMA, ICH M10) in assessing the quality of the assays: linearity, selectivity, determination of the lower limit of quantification (LLOQ), matrix effect, carry-over, and within- and between-run determination of accuracy and precision. They also assessed all of the stability parameters: stability during the preparation of samples in the biological matrix, post-preparative autosampler stability, solvent stability at room temperature (RT) and -20°C for reference standards, and freeze-thaw and frozen storage stability in matrix.
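For readers newer to the BMV terminology, here is a minimal sketch of how within- and between-run accuracy (%bias) and precision (%CV) are often summarized for a QC level. The numbers are made up, and this simple per-run approach is a simplification of the ANOVA-based estimates the guidances describe:

```python
# Illustrative within- and between-run accuracy/precision summary for one QC level.
import statistics as stats

nominal = 10.0                               # nominal QC concentration
runs = [[9.6, 10.2, 9.9],                    # run 1 QC replicates (made-up values)
        [10.4, 10.1, 10.6],                  # run 2
        [9.8, 9.5, 10.0]]                    # run 3

all_values = [v for run in runs for v in run]
overall_mean = stats.mean(all_values)

bias_pct = 100 * (overall_mean - nominal) / nominal                       # accuracy
within_run_cv = 100 * stats.mean([stats.stdev(r) / stats.mean(r) for r in runs])
between_run_cv = 100 * stats.stdev([stats.mean(r) for r in runs]) / overall_mean

print(f"accuracy: {bias_pct:+.1f}% bias")
print(f"within-run precision:  {within_run_cv:.1f}% CV")
print(f"between-run precision: {between_run_cv:.1f}% CV")
```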

 

They also compared the performance of the calibration curves when prepared in methanol, a surrogate matrix or the biological matrix. Unsurprisingly, the methanol calibrators performed optimally, with mixed performance for the calibration curves of the 11 steroids and 7 cannabinoids in the surrogate matrix and biological matrix. Methanol-homogenized plasma was applied to C18 SPE for the steroid extraction, while cannabinoids were extracted from plasma mixed with chloroform and methanol using LLE. The steroids were freed from extracted lipids prior to derivatization for GC-MS/MS. I mention these extraction procedures and the performance of the calibrators in methanol as they may not have been optimized for the chemical breadth of analytes being extracted. This is also noted in the matrix effect measurements for many of the analytes at low and mid QC concentrations.
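For context on those matrix effect measurements, the usual approach compares the analyte response in post-extraction spiked matrix with the response in neat solvent (the matrix factor). A quick sketch with made-up values rather than the paper's data:

```python
# Illustrative matrix-factor calculation behind a "matrix effect" assessment.
area_post_extraction_spike = 8.2e5   # analyte spiked into extracted blank matrix
area_neat_solution = 1.0e6           # same concentration prepared in methanol
matrix_factor = area_post_extraction_spike / area_neat_solution
print(f"Matrix factor = {matrix_factor:.2f} "
      f"({100 * (matrix_factor - 1):+.0f}% suppression/enhancement)")
```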

 

They noted that the "methods meet the stated criteria for recurrent measurement of STs and CBs in human plasma and saliva, as well as in plasma and tissue samples (small brain areas, tissues, and organs) from animal models". For biomarkers, that is a key accomplishment: the ability to provide reliable data at the quality needed to support the study endpoints. I don't currently have access to the CLSI guide for LC-MS/MS assay validation, but would love to compare those recommendations to what this group of researchers performed for a clinical laboratory.

 

I've not delved further into the details and recommend you explore them for yourself in the article.

 

Biomolecules 2023, 13(2), 383; https://doi.org/10.3390/biom13020383

Wednesday, February 22, 2023

Publication: Small molecule biomarker discovery: Proposed workflow for LC-MS-based clinical research projects

I met Tim Garret at CPSA's in-person conferences several times in the pre-COVID era. Tim and his graduate students have always been working at the cutting edge of LC-MS. His latest article, "Small molecule biomarker discovery: Proposed workflow for LC-MS-based clinical research projects", is a collaboration with several global researchers. With the need for more biomarkers that can identify patients with different diseases and monitor the impact of new therapies, robust practices are needed. The article discusses 5 integrated phases that result in robust, reliable data and new biomarker identification. It starts with a sound quality control process that oversees and is built into the other phases:

1) careful planning and implementation of the clinical trial (including the sample collection, processing and storage)

2) planning the LC-MS analyses with proper controls to ensure integrity, followed by sample extraction and then targeted, semi-targeted and non-targeted LC-MS

3) data integration and processing to ensure that observed peaks can reliably be identified and quantified (or relatively quantified in the case of non-targeted analysis)

4) data analysis and interpretation in the context of the clinical trial and disease.

 

Many of the activities in phases 3 and 4 are now supported by a growing range of machine learning (ML) algorithms and artificial intelligence (AI) applications, a surprising number of which are referenced in the article (surprising for their number, not for their inclusion).

 

The article does not delve deeply into the intricate art of targeted or non-targeted LC-MS, but focuses more on the overall process and, as such, serves as a useful tool for anyone who is working in new therapy development and needs to understand how to successfully integrate biomarker discovery into their clinical trials.

Webinar: TransCelerate is offering a webinar on implementing the ICH E8 (R1) guidance

TransCelerate is an industry consortium aimed at improving drug development.  They have developed a number of tools that improve the conduct of clinical trials.  They will be holding a webinar, “What You Need to Know about ICH E8 (R1): Using TransCelerate’s Tools to Help Interpret and Implement ICH E8 (R1)”, on 2 March 2023 at 10 AM EST. It will be a 1-hour discussion on TransCelerate’s tools and resources for ICH E8 (R1) and its implementation. These solutions target key new concepts in ICH E8 such as Critical to Quality Factors, Stakeholder Engagement, Critical Thinking, and Open Dialogue. The webinar will focus on:

  • What’s new in ICH E8 (R1)
  • Focus of our work and how our focus topics were chosen
  • Overview of the published tools
  • Q&A

To register for this free webinar go to this link.

 

Background on ICH E8 (R1) and TransCelerate.

From the ICH website:

The ICH E8(R1) Guideline on General considerations for Clinical Studies reached Step 4 of the ICH Process on 6 October 2021.

Clinical studies of medicinal products are conducted to provide information that can ultimately improve access to safe and effective products with meaningful impact on patients, while protecting those participating in the studies. ICH E8(R1) provides guidance on the clinical development lifecycle, including designing quality into clinical studies, considering the broad range of clinical study designs and data sources used.

This modernisation of ICH E8 is the first step towards the Renovation of Good Clinical Practice initiated in 2017. The revision incorporates the most current concepts achieving fit-for-purpose data quality as one of the essential considerations for all clinical trials. 

From TransCelerate's website:

TransCelerate BioPharma’s mission is to collaborate across the global biopharmaceutical research and development community to identify, prioritize, design, and facilitate the implementation of solutions designed to drive the efficient, effective and high-quality delivery of new medicines.

 

Monday, February 20, 2023

Publication: A Novel Neutralization Antibody Assay Method to Overcome Drug Interference with Better Compatibility with Acid‑Sensitive Neutralizing Antibodies

A group at Merck recently published the above-titled article in the AAPS Journal (2023) 25:18
https://doi.org/10.1208/s12248-023-00783-9 (link). They propose a new approach for the isolation and measurement of neutralizing antibodies that minimizes the harsher acid treatments of BEAD assays.  It appears to build on the PandA assay concept. The initial step is the addition of excess drug to form NAb-drug complexes that are precipitated out of the sample for further processing. After a lactic acid dissociation, biotin-bound-drug is added in excess of the expected NAb concentration.  The biotin-bound-drug then complexes with the NAb and is captured on the MSD plate along with the non-complexed biotin-bound-drug.  Using a ruthenium-tagged drug target, the non-complexed biotin-bound-drug is measured. That measurement is inversely related to the concentration of the complexed (biotin-bound-drug-NAb) species, and therefore to the NAb concentration.
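To make that inverse relationship concrete, here is a toy mass-balance sketch (my illustration with hypothetical values, assuming 1:1 binding and complete complexation): the free, non-complexed biotin-bound-drug that generates signal falls as the NAb level rises.

```python
# Toy mass-balance sketch (hypothetical values): the ECL signal tracks the
# non-complexed biotin-bound-drug, so it falls as the NAb level rises.
total_biotin_drug = 100.0                      # arbitrary units, added in excess of expected NAb
for nab in (0.0, 20.0, 50.0, 80.0):            # hypothetical NAb levels
    free_biotin_drug = total_biotin_drug - nab # assumes 1:1 binding and complete complexation
    print(f"NAb = {nab:5.1f} -> free (signal-generating) biotin-drug = {free_biotin_drug:5.1f}")
```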

It is interesting in that it may detect acid-labile NAb that may have been destroyed by the use of a BEAD process. The reduction in the use of biotin-bound-drug is also an advantage.

I look forward to seeing how other labs test this approach and provide further insights on its broad applicability.   What do you think about the proposed assay approach?

Friday, February 17, 2023

Listening to the MCERSI & FDA webinar: Application of Artificial Intelligence and Machine Learning for Precision Medicine

A few notes from the morning speakers:

Keynote was a fireside chat with Dr. Eric Topol:

Through 2019, most applications of AI were retrospective; since then, studies are being designed with AI included from the start. The current challenge comes from wearables and continuous monitoring devices - these generate huge amounts of data that are difficult to manage, on top of teaching the AI how to interpret what could be minor changes that accumulate over time.

Bias can arise in AI assessments when the training data is not inclusive from a racial, socio-economic, and lifestyle standpoint. The Optum report includes examples where a historic publication or study establishes standards for a disease that are related to a specific population and not applicable to other specialty populations with similar symptoms/disease.

Cell phones are now being used to capture ultrasound imaging for interpretation by local or virtually connected physicians and other health care workers. 

Data security and privacy-preserving computing are a current focus; a number of approaches are in development (e.g., federated and swarm learning).

Emphasized the use of AI to permit more time for the physician to spend with patients to gather more data; allowing the AI to have a richer dataset for assessment and recommendations. 

Dr. Alan Edelman (MIT) talk on "Intro to AI & ML"

Talked about 5 areas for AI in healthcare: Scientific AI to look for trends in data to generate new discoveries, Natural Language Processing (patient chatbots, unifying patient records, redacting confidential info), Precision Medicine (custom treatments based on an individual's characteristics), Computer Vision (detecting tumors and lesions), and Physician Guidance (during surgery, recommending courses of therapy).

Felt that we are not even close to the possibilities of AI & ML.

Dr. James Lu  "AI-partnered Dynamic Model Discovery for Precision Medicine"

Wants to use pharmacology and pharmacodynamic models to develop causal models that, through additional learning, grow into comprehensive disease models that use patient-specific data to guide therapy.

Dr. Nadia Terranova (Merck KGaA) "Enabling AI learning to support Precision Medicine"

Totality of information (high dimension data) for a patient can be compared to multiple pharmacology models to guide therapy selection and application.  Notes inter-tumor heterogeneity for a patient, as well as across patients.  Showed data from a study of 6369 individual lesions that applied these concepts.

Next speaker Dr. Ksorbo

Deep PumasAI - bridges machine learning models, which require large data sets, with scientific models of known results (e.g., PK of the drug, known biomarkers of disease). Conceptually, this reduces the learning curve while enabling the ability to query the situation for a single patient for the best therapy and dosing model, as well as quality of life and related issues.

Rahul Goyal "ScreenMCM: A machine learning-based product screening tool to accelerate medical countermeasure development"

Due to the nature of the disease or condition (e.g., radiation exposure, toxin exposure), computer models are used to reduce the number of animals in testing while testing more therapies. A model was created from 3 animal studies that produces 1500 rules against which new compounds can be tested. 60% of the study data was used for training and 40% to verify that the model was accurately predicting outcomes; accuracy was noted at >70%. Application to additional discovery therapies allowed selection of products using 10-day study data with fewer animals.
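As an illustration of that 60/40 train-and-verify workflow (this is a generic sketch, not the actual ScreenMCM model or its animal-study features):

```python
# Generic 60/40 train-and-verify sketch with a rule-generating tree ensemble;
# stand-in data, not the ScreenMCM model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Rows = candidate compounds/doses, columns = study-derived features, y = outcome.
X, y = make_classification(n_samples=300, n_features=12, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=0)        # 60% training, 40% verification

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"verification accuracy: {accuracy:.0%}")  # the talk noted >70% for their model
```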

Dr. Hao Zhu (FDA) "Application of AI and ML in Drug Development and Precision Medicine"

AI can handle a large number of variables, whereas traditional pharmacometric models typically use fewer variables. However, pharmacometric models can provide good statistical understanding, while AI does not provide robust statistics.

 


Thursday, February 16, 2023

Article: Lessons from CDER’s Quality Management Maturity Pilot Programs

For my CMC and QA colleagues, the open-access article published in The AAPS Journal reviews the activity and findings of the FDA CDER division's pilot program that evaluated companies' quality systems maturity (i.e., compliance with Current Good Manufacturing Practices (CGMP)). As noted in the article, over an 18-month period, contractors were used to assess companies against a standard assessment tool to determine:

(i) the level of integration of the quality system and quality objectives with business and manufacturing operations at an establishment,

(ii) the agility of an establishment in responding to unexpected changes (e.g., supply chain disruptions, demand surges, deviations, natural disasters), and

(iii) the resilience of an establishment's business and production processes.

The outcomes are an interesting read (e.g., the time limits for people to respond were not commensurate with the required details, and some questions are best answered by management, while others should be answered by those performing the activities).

 

Based on the success of the pilot, the assessment tool and process are being refined and will continue to be used.

News article: Euformatics, ViennaLab, Oncompass Win European Liquid Biopsy Development Contract

This was reported at 360Dx and is interesting as it intends to use patients' NGS results to develop the database and then characterize each patient's cancer against known profiles for optimized treatment.

Finnish NGS data firm Euformatics and partners ViennaLab Diagnostics and Oncompass Medicine have been awarded a Phase 1 contract by a group of leading European hospitals to research and develop an economically sustainable liquid biopsy platform for molecular profiling in cancer patients, Euformatics said on Monday.

The EU-funded five-year contract is being issued under the OncNGS pre-commercial procurement project, in which eight medical centers in Belgium, France, Germany, Italy, and Spain are acting as collective buyers. The effort aims to fill an unmet need for "minimally invasive, scalable, and cost-efficient solutions in Europe for screening and diagnosing cancer," the partners said in a statement.

The group's project is one of four collectively funded by the OncNGS program's €7 million. The other three contractors are Belgian university KU Leuven, Agilent Technologies Belgium, and OncoDNA.

Euformatics, ViennaLab, and Oncompass decided to work together to combine their respective existing NGS technology, IP, and expertise in variant interpretation and NGS data quality control.

"Diagnosing cancer accurately using NGS data from a liquid biopsy sample is not a trivial task. Offering an end-to-end solution from library preparation until therapy selection is even harder," Euformatics CEO Tommi Kaasalainen said in a statement.

Phase 1 of the effort begins this month, with the first delivery milestone due in the second quarter of this year. The procurement consortium will then evaluate the achievements of the four contracted suppliers and invite those who are successful to submit an offer for Phase 2, which involves further prototype development and validation.

Tuesday, February 14, 2023

Publication: Japan Bioanalysis Forum's considerations and recommendations for qPCR and RT-qPCR assays

The Japan Bioanalysis Forum recently published in the journal Bioanalysis an open-access paper, "Understanding quantitative polymerase chain reaction bioanalysis issues before validation planning: Japan Bioanalysis Forum discussion group".  This is both a review article and a best-practices position paper. As such, it provides both a broad perspective and a deep assessment of state-of-the-art practice.  A few things I found interesting follow.

 

Coming from a chromatography background with its typical accuracy and precision, then moving into immunoassays and recently being involved in qPCR assays, I've found it quite interesting how the accuracy and precision numbers are deemed acceptable based on the capability of the technology and the use of the data for pharmacokinetics.  Here, the authors talk about variability in the Cq values:

Based on the authors’ experiences, the acceptable variation in qPCR Cq values is +/- 0.25 or less with technical issues and +/- 0.5 or less with biological variations, which makes a total acceptable variation of +/- 0.75 or less. Hence, variation at the time of measurement is often deemed insignificant if the variation in Cq values is +/- 0.75 or less. This corresponds to a variation of -41 to +68% when Cq values are converted to copy numbers.
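The -41% to +68% range follows directly from the exponential nature of PCR. A minimal sketch, assuming ideal (100%) amplification efficiency so that copy number scales as 2 raised to the Cq shift:

```python
# Convert a Cq shift into the corresponding change in copy number, assuming
# ideal (100%) amplification efficiency, i.e., copies halve for each +1 in Cq.
delta_cq = 0.75
low = 2 ** (-delta_cq)    # Cq higher by 0.75 -> ~0.59x the copies (about -41%)
high = 2 ** (+delta_cq)   # Cq lower by 0.75  -> ~1.68x the copies (about +68%)
print(f"{(low - 1) * 100:+.0f}% to {(high - 1) * 100:+.0f}%")   # "-41% to +68%"
```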

 

Calibration curve preparation is also considered and the use of a well-characterized surrogate matrix is proposed: "when the matrix of a sample is diverse (e.g., tissue types, animal species or strain differences), scarce and/or limited in quantity, it is impractical to plot a calibration curve for each type of matrix."  This highlights a key difference between accepted practice for immunoassays and LC-MS assays under the BMV guidance and practice for qPCR assays - specifically, that after extraction it is acceptable to put more than a single tissue or fluid type into a qPCR plate/test, as the amplification process is only looking for the target nucleic acid sequence.

 

The authors also review a few calculation models for relative quantification, techniques I was unfamiliar with.
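As one widely used example of a relative-quantification model (my illustration; I'm not asserting it is among those the authors review), the comparative 2^(-ΔΔCq) method normalizes the target to a reference gene and then to a control sample:

```python
# Comparative 2^(-ΔΔCq) sketch; assumes ~100% amplification efficiency for both
# assays, and the Cq values below are made up.
cq_target_treated, cq_ref_treated = 24.1, 18.0
cq_target_control, cq_ref_control = 26.3, 18.1

delta_cq_treated = cq_target_treated - cq_ref_treated   # normalize to the reference gene
delta_cq_control = cq_target_control - cq_ref_control
delta_delta_cq = delta_cq_treated - delta_cq_control    # compare with the control sample

fold_change = 2 ** (-delta_delta_cq)
print(f"relative quantity = {fold_change:.1f}-fold vs. control")   # ~4.3-fold here
```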

 

The article also touches on the key points related to achieving success:

  1. Preventing contamination from facilities, clothing, equipment, consumables and reagents
  2. Training staff for consistent practices and contamination prevention

 

There are many other topics considered with best practices discussed, as well as a long list (61) of references should the reader want to review a topic in greater detail. Overall, an article well worth the time to read.

Monday, February 13, 2023

Article: Consideration for the validation of clinical laboratory methods in nonclinical fields

This article, published by Minomo et al., describes an approach for validating clinical laboratory tests for animal studies.  It comes from a large group of companies and institutes, and looks at an area essential for safety testing of new therapies in GLP safety studies - an area that likely has a rich publication history, but one I’ve not had much exposure to. The majority of the regulations and guidelines I’m familiar with (e.g., CLSI documents) address human clinical laboratories.

 

Here, the authors note the absence of guidelines for laboratories testing the biomarkers and clinical chemistry of animal samples.  I was going to write "the clinical lab testing of non-clinical samples", but the lack of clarity on the use of “clinical” could confuse the issue. The article considers a number of guidelines from human drug research and clinical lab assay validation to define recommendations for assay validations.  These recommendations cover the expected key parameters and look at whether the tests are based on an in vitro diagnostic kit vs. tests using investigational reagents vs. a lab-developed test:

1) Specificity, selectivity;

2) Accuracy, trueness;

3) Precision;

(1) Repeatability;

(2) Intermediate precision;

4) LOD, LOQ;

5) Linearity, parallelism;

6) Range;

7) Robustness;

8) Stability;

9) Traceability and uncertainty.

 

If you are working in this space or using data from such tests, it is a worthwhile read.

 

Bioanalysis (2022) 14(21), 1337–1348  DOI: 10.4155/bio-2022-0178

Friday, February 10, 2023

FDA Hosting Public Workshop: Understanding Priorities for the Development of Digital Health Technologies to Support Clinical Trials for Drug Development and Review

Driven by the pandemic, one of the most rapid evolutions in clinical trials in the past few years has been the shift to decentralized trials and, supporting them, the implementation of digital technologies.  To address the shifting paradigm, the FDA and Duke University are hosting a Public Workshop on the 'Development of Digital Health Technologies'.  I've attended several other workshops by this collaboration and they have always been thought provoking and good opportunities to ask questions and hear questions and answers from others in the field.

From the announcement:

Summary

FDA and the Duke-Robert J. Margolis, MD Center for Health Policy will host the virtual public workshop “Understanding Priorities for the Development of Digital Health Technologies to Support Clinical Trials for Drug Development and Review” on March 28, 2023, 1:00 PM – 4:15 PM ET and March 29, 2023, 1:00 PM – 4:45 PM ET. The purpose of the public workshop is to understand the priorities for the development of Digital Health Technologies (DHTs) to support clinical drug trials, including accessibility, diversity, and clinical outcomes measures using DHTs.   

FDA plans to discuss priorities and challenges for the development of DHTs to support clinical drug trials including:  

  • improving participant access, increasing diversity, and facilitating engagement through remote trial-related measurements; 
  • understanding patient and industry perspectives;   
  • understanding opportunities for remote data acquisition directly from trial participants; and 
  • using DHTs to capture clinical outcomes measures.

Date:
March 28 - 29, 2023
Day 1: 1:00 PM - 4:15 PM ET
Day 2: 1:00 PM - 4:45 PM ET

Thursday, February 9, 2023

FDA post: Artificial Intelligence/Machine Learning Assisted Image Analysis for Characterizing Biotherapeutics

The FDA made a post that presents some collaborative research to emphasize the value of the techniques employed.  Using AI, machine learning, and convolutional neural networks (CNNs), the FDA worked with the Univ. of Colorado and NIST to examine flow imaging microscopy images of biotherapeutic products for subvisible particles. These particles are typically generated through stress of the product and are of concern as they may be protein aggregates with the potential to cause immunogenicity.

Flow imaging microscopy can generate hundreds of images that are cumbersome to process manually. The researchers used unstressed and stressed lots of biotherapeutics to train the AI, with the intent of determining whether subvisible particle characteristics can be monitored to show the extent of stress a sample has undergone. Ultimately, by using different stressors, the system will be able to identify those that have the greatest potential to form aggregates and allow formulators to introduce changes that minimize aggregate formation.

The process employed in the CNN processing of the images is described in "Figure 2. Basic CNN workflow" in the posting:

A CNN is used as an image “classifier”, i.e., the network is intended to process an image of a single particle and predict if that particle comes from one of the two classes: “Stressed” or “Non-Stressed”. (Note that for stressed condition, the model protein solution is kept under shaking for 7 days at ambient temperature, non-stressed protein solution is kept at ambient temperature without shaking stress). To train (i.e., estimate the most discriminatory parameters) this classifier, a large collection of images properly labeled as stressed or unstressed was used. The first step is pre-processing of these FIM images (resizing, normalization, segmentation, etc.) to generate image batches for efficient processing. Then the CNN sequentially passes the batches of images through several “convolutional layers.” Within each convolutional layer, a “filter” (which is itself a small 2D image) is convolved with the input image. The parameters of the filters are determined by optimizing a measure that is specific to the task at hand (e.g., a binary cross-entropy loss in the image classification task shown here). Once all model parameters are estimated (or “learned”), the CNN can process new images in a feed forward fashion. That is, in each convolutional layer, a new set of filters (whose parameters were determined in the “learning phase”) are convolved with the input images from the previous layers producing new “activation images” which serve as input images for the next layer (usually with a smaller size and increased number of channels compared to the images of the previous layer). After passing through all the convolution filter layers, the resulting activation images are typically passed to a fully connected artificial neural network to extract the final “data-driven” features.
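To make the quoted workflow more concrete, here is a minimal PyTorch sketch of a binary "stressed" vs. "non-stressed" particle-image classifier with convolutional layers, a fully connected head, and a binary cross-entropy loss. This is my own illustration, not the FDA/NIST model; the image size, layer sizes, and stand-in tensors are assumptions:

```python
# Minimal sketch: small CNN classifying single-particle images as stressed (1)
# vs. non-stressed (0). Stand-in random tensors replace real pre-processed
# (resized, normalized, segmented) FIM image batches.
import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # stacked convolutional layers
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(          # fully connected "head"
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 1),           # assumes 64x64 grayscale inputs
        )

    def forward(self, x):
        return self.classifier(self.features(x))  # one raw logit per image

model = ParticleCNN()
loss_fn = nn.BCEWithLogitsLoss()                  # binary cross-entropy on the logit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(32, 1, 64, 64)               # stand-in batch of particle images
labels = torch.randint(0, 2, (32, 1)).float()     # 1 = stressed, 0 = non-stressed

loss = loss_fn(model(images), labels)              # one training step
loss.backward()
optimizer.step()
```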

More details are in the post and the reference publications.

Wednesday, February 8, 2023

FDA Webinar: Office of Tissues and Advanced Therapies (OTAT) Town Hall Q&A session

Held on 7 February 2023, the OTAT Town Hall was interesting in that it was only Q&A on gene therapy, with no presentations.  It started with 15 questions that had been submitted prior to the webinar and then followed with questions from the audience.  Note that the following is based on my interpretation of the responses to the questions by the FDA speakers and is not the FDA's position.

 

Several questions were about choosing the study population in early trials vs. later trials. The FDA's response for adult and pediatric studies was pretty consistent - it is a benefit-risk based optimization, and that must include the patient’s ability to provide consent. This tended to mean patients with more advanced disease (including those who cannot take any existing therapies due to side effects or are refractory to those therapies). Patients should, however, be those without other co-morbidities that would reduce the patient’s ability to handle side effects. In pediatric trials, this also means starting with older children. For pediatric trials, the concept of "prospect of direct benefit" applies, where the risks are mitigated to some extent through animal testing and preferably adult testing first - i.e., ensuring that the risk is minimized in favor of the benefit (see 21 CFR 50 subpart D).

 

Other questions explored the appropriate control arms of studies, and the FDA noted 5 different approaches to control arms, with the least favored being open trials followed by externally controlled trials.  The concept of externally controlled trials, which was the focus of a recently released draft guidance, was felt to be best applied to well-characterized diseases, but such trials were generally not favored since they cannot include blinding or randomization, both of which were noted as important for rare diseases. Additionally, externally controlled trials may not be adequately controlled for a variety of factors (e.g., disease status, progression, supervision of the external control arm of the study).

 

On the question of "how can biomarker endpoints be best used in gene therapy clinical trials?": efficacy is typically measured based on minimizing morbidity and reducing the clinical signs of the disease, and biomarkers may help characterize the status of the disease.  Start a natural history study early to identify the most appropriate biomarker(s).  Use of biomarkers relies on validated biomarkers, or on a biomarker showing a relative change that aligns with a change in disease status (i.e., an intermediate gene therapy endpoint that is predictive of the population efficacy endpoint).  In this case, the FDA would still like to see a reduction in morbidity, but the biomarker may help accelerate the approval.

 

Overall, a good opportunity to hear and understand the FDA thinking on implementing gene therapy clinical trials in adults and children.

Tuesday, February 7, 2023

FDA Webinar: Overview: Clinical Pharmacology Considerations for Neonatal Studies

 February 15, 2023

1:00 PM - 2:00 PM ET

From the FDA notification with registration information:

ABOUT THIS WEBINAR

In this webinar, FDA will discuss:

  • An overview of the current status and the gaps related to the inclusion of neonates in drug development
  • Clinical pharmacology considerations for planned studies in neonates
  • General pharmacokinetic, pharmacodynamic, and pharmacogenomic considerations for clinical pharmacology studies in neonates
  • Unique clinical and study design considerations for studying neonates
  • Innovative approaches that can be incorporated into study design to address unique challenges in neonates

TOPICS COVERED

  • Defining neonatal subpopulations that can be used for study design and study results reporting
  • Clinical pharmacology and study design considerations for neonatal studies
  • Innovative approaches to study design and analysis to address unique challenges in performing clinical trials in neonates

    FDA SPEAKERS

    A Pediatric Research Imperative: Addressing Neonates in Drug Development 

    Dionna Green, Director, Office of Pediatric Therapeutics (OPT) | Office of the Commissioner (OC) | FDA 

    Clinical Considerations for Neonatal Drug Development 

    An Massaro, Supervisory Medical Officer, OPT | OC | FDA 

    Clinical Pharmacology of Neonates and Considerations for Study Design 

    Elimika Pfuma Fletcher, Policy Lead and Senior Clinical Pharmacologist, Office of Clinical Pharmacology (OCP) | Office of Translational Sciences (OTS) | CDER | FDA

    Monday, February 6, 2023

    What is going on with PhRMA - AbbVie and Teva leave the organization

     Reported by FiercePharma and based on a report by Stat News (behind paywall).

    Taking a cue from AbbVie’s playbook, Teva is the latest drugmaker to walk away from the Pharmaceutical Research and Manufacturers of America (PhRMA).

    “Teva has decided not to renew its membership with PhRMA in 2023,” Brian Newell, spokesperson for the influential lobbying group, said over email. Teva also confirmed its departure in a statement provided to Fierce Pharma.

    As with AbbVie in December, Teva’s rationale wasn’t immediately clear. Teva is the leading producer of generic drugs worldwide, but the company also markets branded medicines such as migraine prevention med Ajovy and Austedo for tardive dyskinesia.

    “We annually review effectiveness and value of engagements, consultants and memberships to ensure our investments are properly seated,” a Teva spokesperson explained over email. “We continue to remain engaged—in DC and around the world—on the issues important to our company and to the millions of patients who rely on our products.”

    I recall there was a lot of discussion a while back when PhRMA decided to focus on industry advocacy and be less involved in science-based regulatory guidance interactions.  The article provides a bit on AbbVie's and Teva's rationale for leaving the organization and the focus for PhRMA's activities from this year's Chair.

    Have you heard anything on other company's activity with PhRMA?


    Biomarker Article Reading for the Week

    Analytical and Bioanalytical Chemistry. Volume 415, Issue 6 is now available online and brings together a number of articles on biomarkers using different technologies. Many are focused on solutions supporting personalized medicine. I’ve downloaded a number of them (titles and authors below) for reading this week.

     

    Electrochemical biosensors — driving personalized medicine

    Maria Jesús Lobo-Castañón, Susana Campuzano

     

    Detection of COVID-19-related biomarkers by electrochemical biosensors and potential for diagnosis, prognosis, and prediction of the course of the disease in the context of personalized medicine

    Viviana Vásquez, Jahir Orozco

     

    Point-of-care electrochemical testing of biomarkers involved in inflammatory and inflammatory-associated medical conditions

    Diana-Gabriela Macovei, Maria-Bianca Irimes, Oana Hosu, Cecilia Cristea, Mihaela Tertis

     

    Electrochemical biosensors for analysis of DNA point mutations in cancer research

    Katerina Ondraskova, Ravery Sebuyoya, Ludmila Moranova, Jitka Holcakova, Petr Vonka, Roman Hrstka, Martin Bartosik

     

    Practical tips and new trends in electrochemical biosensing of cancer-related extracellular vesicles

    Patrick Severin Sfragano, Serena Pillozzi, Gerolama Condorelli, Ilaria Palchetti

     

    An ultrasensitive and disposable electrochemical aptasensor for prostate-specific antigen (PSA) detection in real serum samples

    Canan Özyurt, İnci Uludağ, Mustafa Kemal Sezgintürk

     

    Accelerating the development of implantable neurochemical biosensors by using existing clinically applied depth electrodes

    Alexander R. Macdonald, Francessca Charlton, Damion K. Corrigan


    FDA Webinar: A Deep Dive: FDA Draft Guidance on Statistical Approaches to Establishing Bioequivalence

     

    Date:  March 14, 2023
    Time:  10:00 AM - 12:00 PM ET
    ABOUT THIS WEBINAR

    In December 2022, FDA issued a draft guidance for industry entitled Statistical Approaches to Establishing Bioequivalence, which provides recommendations to sponsors and applicants who intend to use equivalence criteria in analyzing in vivo or in vitro bioequivalence (BE) studies for investigational new drugs (INDs), new drug applications (NDAs), abbreviated new drug applications (ANDAs), and supplements to these applications. This guidance discusses statistical approaches for BE comparisons and focuses on how to use these approaches generally and in specific situations. When finalized, this guidance will replace the February 2001 FDA guidance for industry “Statistical Approaches to Establishing Bioequivalence” and will represent FDA’s current thinking on this topic.

    This webinar will take a deeper look into the draft guidance for new and revised content and provide clarification to comments received through the public docket.
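    As background to the statistical approaches the guidance covers, the long-standing average bioequivalence criterion requires the 90% confidence interval of the test/reference geometric mean ratio to fall within 80.00-125.00%. A simplified, paired sketch (made-up PK values, ignoring the period and sequence effects a proper crossover ANOVA would include):

```python
# Simplified average-bioequivalence check; illustrative only.
import math
import statistics as stats
from scipy.stats import t

test_auc = [812, 950, 1104, 876, 990, 1032, 915, 880]   # AUC, test product (made up)
ref_auc  = [798, 1010, 1085, 901, 940, 1055, 930, 905]  # AUC, reference product (made up)

log_ratios = [math.log(a) - math.log(b) for a, b in zip(test_auc, ref_auc)]
n = len(log_ratios)
mean_lr = stats.mean(log_ratios)
se = stats.stdev(log_ratios) / math.sqrt(n)
t_crit = t.ppf(0.95, df=n - 1)                           # two one-sided tests at alpha = 0.05

ci_low, ci_high = math.exp(mean_lr - t_crit * se), math.exp(mean_lr + t_crit * se)
within = 0.80 <= ci_low and ci_high <= 1.25
print(f"GMR 90% CI: {100 * ci_low:.1f}% - {100 * ci_high:.1f}% "
      f"({'meets' if within else 'fails'} the 80.00-125.00% criterion)")
```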

     Further details and agenda are available at this link

    Wednesday, February 1, 2023

    FDA Webinar: Application of Artificial Intelligence & Machine Learning for Precision Medicine

    In collaboration with the University of Maryland's CERSI group, the FDA will be holding a one-day workshop “Application of Artificial Intelligence and Machine Learning for Precision Medicine” on Friday, February 17, 2023.  It is a virtual meeting and details with registration information are available through this webpage.

    The noted meeting objectives are:

    1. Review progress made from implementing artificial intelligence and machine learning in drug development and precision medicine. 
    2. Discuss methodologies and best practices used today in this field
    3. Discuss technical challenges such as bias, generalizability, and opacity and how they can lead to issues of data disparity, fairness, and trustworthiness. We will also discuss how to address these challenges.

    AI can be a great tool but, as noted in objective 3, is not without some problems.  I'll be attending to learn more and keep an eye on this important field.

    FDA and CMS issue statement on LDTs: Americans Deserve Accurate and Reliable Diagnostic Tests, Wherever They Are Made

    This joint statement notes the evolution of Laboratory Developed Tests (LDTs) from the initial rule and approach the FDA had for oversight, a...