Figure 3. ipf-P3GPT model architecture. The transformer comprises a 64-dimensional embedding layer, three transformer blocks with two attention heads each, and a vocabulary-sized output layer. The model processes disease, age-group, and compound-treatment comparisons using a custom XML-aware tokenizer, achieving 72.2-75.9% validation accuracy across instruction types.
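The shape parameters in the caption (64-dimensional embeddings, three blocks, two heads, vocabulary-sized logits) can be sketched as a minimal decoder-only forward pass. This is an illustrative sketch with random, untrained weights, not the authors' implementation; the vocabulary size, sequence length, weight tying, and all helper names are assumptions.

```python
import numpy as np

# Dimensions from the Figure 3 caption; VOCAB and SEQ are assumed values.
D_MODEL, N_BLOCKS, N_HEADS, VOCAB, SEQ = 64, 3, 2, 1000, 16
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, n_heads):
    # Causal multi-head self-attention with random (untrained) projections.
    seq, d = x.shape
    hd = d // n_heads
    wq, wk, wv, wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    q, k, v = x @ wq, x @ wk, x @ wv
    heads = []
    for h in range(n_heads):
        s = slice(h * hd, (h + 1) * hd)
        scores = q[:, s] @ k[:, s].T / np.sqrt(hd)
        # Causal mask: each token attends only to itself and earlier tokens.
        scores += np.triu(np.full((seq, seq), -1e9), k=1)
        heads.append(softmax(scores) @ v[:, s])
    return np.concatenate(heads, axis=-1) @ wo

def block(x):
    # Layer norm omitted for brevity; attention + 2-layer MLP with residuals.
    x = x + attention(x, N_HEADS)
    w1 = rng.standard_normal((D_MODEL, 4 * D_MODEL)) / np.sqrt(D_MODEL)
    w2 = rng.standard_normal((4 * D_MODEL, D_MODEL)) / np.sqrt(4 * D_MODEL)
    return x + np.maximum(x @ w1, 0) @ w2

# Token + position embeddings -> three blocks -> vocabulary-sized logits.
tok_emb = rng.standard_normal((VOCAB, D_MODEL)) * 0.02
pos_emb = rng.standard_normal((SEQ, D_MODEL)) * 0.02
ids = rng.integers(0, VOCAB, SEQ)
h = tok_emb[ids] + pos_emb
for _ in range(N_BLOCKS):
    h = block(h)
logits = h @ tok_emb.T  # weight-tied output projection (an assumption)
print(logits.shape)     # one row of vocabulary logits per input token
```

In an actual training setup the XML-aware tokenizer would map the instruction text (disease, age group, compound) to the integer `ids` fed into the embedding table.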