The FDA published a post presenting collaborative research that highlights the value of the techniques employed. Working with the University of Colorado and NIST, the FDA applied artificial intelligence and machine learning, specifically convolutional neural networks (CNNs), to flow imaging microscopy images of biotherapeutic products in order to examine subvisible particles. These particles are typically generated when the product is stressed and are of concern because they may be protein aggregates with the potential to cause immunogenicity.
Flow imaging microscopy can generate hundreds of images that are cumbersome to process manually. The researchers trained the model on unstressed and stressed lots of a biotherapeutic, with the intent of determining whether subvisible particle characteristics can be monitored to show the extent of stress a sample has undergone. Ultimately, by using different stressors, the system should be able to identify those with the greatest potential to form aggregates and allow formulators to introduce changes that minimize aggregate formation.
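A minimal sketch, assuming PyTorch and torchvision, of how such labeled flow imaging microscopy (FIM) particle images might be organized for supervised training. The folder layout ("fim_images/stressed", "fim_images/non_stressed"), the image size, and the normalization values are hypothetical illustrations, not details from the FDA post.

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Pre-processing: single-channel conversion, resizing to a common size,
# and simple intensity normalization (illustrative values).
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])

# Each subfolder name becomes a class label: non_stressed -> 0, stressed -> 1.
dataset = datasets.ImageFolder("fim_images", transform=preprocess)
loader = DataLoader(dataset, batch_size=64, shuffle=True)

for images, labels in loader:
    # images: tensors of shape [64, 1, 64, 64]; labels: 0 (non-stressed) or 1 (stressed)
    break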
The CNN image-processing workflow is described in "Figure 2. Basic CNN workflow" in the posting:
A CNN is used as an image “classifier”, i.e., the network is intended to process an image of a single particle and predict if that particle comes from one of the two classes: “Stressed” or “Non-Stressed”. (Note that for stressed condition, the model protein solution is kept under shaking for 7 days at ambient temperature, non-stressed protein solution is kept at ambient temperature without shaking stress). To train (i.e., estimate the most discriminatory parameters) this classifier, a large collection of images properly labeled as stressed or unstressed was used. The first step is pre-processing of these FIM images (resizing, normalization, segmentation, etc.) to generate image batches for efficient processing. Then the CNN sequentially passes the batches of images through several “convolutional layers.” Within each convolutional layer, a “filter” (which is itself a small 2D image) is convolved with the input image. The parameters of the filters are determined by optimizing a measure that is specific to the task at hand (e.g., a binary cross-entropy loss in the image classification task shown here). Once all model parameters are estimated (or “learned”), the CNN can process new images in a feed forward fashion. That is, in each convolutional layer, a new set of filters (whose parameters were determined in the “learning phase”) are convolved with the input images from the previous layers producing new “activation images” which serve as input images for the next layer (usually with a smaller size and increased number of channels compared to the images of the previous layer). After passing through all the convolution filter layers, the resulting activation images are typically passed to a fully connected artificial neural network to extract the final “data-driven” features.
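The following is a minimal sketch of that workflow in PyTorch: stacked convolutional layers produce activation images with more channels and smaller spatial size, a fully connected head produces the stressed/non-stressed prediction, and training minimizes a binary cross-entropy loss. The layer sizes, filter counts, and 64x64 single-channel input are illustrative assumptions, not the architecture used in the FDA/NIST study.

import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers: each applies a bank of learned 2D filters,
        # producing "activation images" with more channels and a smaller
        # spatial size after pooling.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1 -> 16 channels
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # 16 -> 32 channels
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
        )
        # Fully connected head mapping the final activation images to a
        # single logit for the stressed vs. non-stressed decision.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ParticleCNN()
# Binary cross-entropy loss on the raw logit, as in the posting's description.
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a batch of labeled images
# (images: [batch, 1, 64, 64]; labels: 0 = non-stressed, 1 = stressed).
def train_step(images, labels):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, new images are classified in a feed-forward pass:
# probability_stressed = torch.sigmoid(model(new_images))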
More details are in the post and the referenced publications.