I focus on transformer-based generative models, studying how self-attention enables large language models to process and generate human-like text. My research spans both theoretical and practical questions, from improving model efficiency, interpretability, and alignment with human values, to building real-world applications that use transformers for creative content generation, knowledge extraction, and data-driven decision-making.
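For readers unfamiliar with the mechanism, the following is a minimal sketch of scaled dot-product self-attention, the core operation inside the transformers mentioned above. It is purely illustrative: the shapes, variable names, and single-head setup are assumptions for the example and do not describe any particular model from my work.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Minimal single-head scaled dot-product self-attention.

    X:             (seq_len, d_model) token embeddings
    W_q, W_k, W_v: (d_model, d_k) learned projection matrices
    Returns:       (seq_len, d_k) context-aware representations.
    """
    Q = X @ W_q                                   # queries
    K = X @ W_k                                   # keys
    V = X @ W_v                                   # values
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # pairwise similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ V                            # each token mixes information from all tokens

# Toy usage: 4 tokens, 8-dimensional embeddings and heads (illustrative sizes)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)     # (4, 8)
```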