The HuggingFace Transformers library offers many fundamental building blocks and a wide range of functionality to kickstart your AI code. Many products and libraries have been built on top of it, and in this short blog, I'll discuss some of the ways people have extended it to add custom training code on top of the HuggingFace Transformers library:
- Reimplementing the training code by iterating through the training data to recreate the fine-tuning loop and then adding in custom code, and
- Creating custom callbacks attached to the Trainer class so that custom code can be added to the callbacks.
Clearly, there may be other ways to customize the fine-tuning loop, but this blog is intended to focus on these two approaches.
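To preview the second approach, a custom callback is a class that hooks into events the Trainer fires during training. The sketch below is a hypothetical example (the class name and the custom logic inside it are assumptions, not from the original post); it uses the real `TrainerCallback` base class from the library:

```python
from transformers import TrainerCallback

class EpochLoggingCallback(TrainerCallback):
    """Hypothetical callback that runs custom code at the end of each epoch."""

    def on_epoch_end(self, args, state, control, **kwargs):
        # state.epoch holds the (possibly fractional) epoch count
        # maintained by the Trainer during training.
        print(f"Finished epoch {state.epoch}")
```

Such a callback would then be passed to the Trainer via its `callbacks` argument, e.g. `Trainer(..., callbacks=[EpochLoggingCallback()])`.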
Typically, when you train a model, you create a Trainer object that lets you specify the parameters for training. The Trainer object surfaces a train() method that you can call to initiate the training loop:
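A minimal sketch of that pattern might look as follows; the checkpoint name, the toy two-example dataset, and the hyperparameters are illustrative assumptions, not values from the original post:

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed checkpoint for illustration; any sequence-classification
# checkpoint would work the same way.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A toy labelled dataset, just enough for the loop to run end to end.
texts = ["I loved this movie", "I hated this movie"]
labels = [1, 0]
encodings = tokenizer(texts, truncation=True, padding=True)

class TinyDataset(torch.utils.data.Dataset):
    """Wraps tokenizer output and labels as a torch Dataset for the Trainer."""

    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

training_args = TrainingArguments(
    output_dir="out",               # where checkpoints are written
    num_train_epochs=1,             # illustrative hyperparameters
    per_device_train_batch_size=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=TinyDataset(encodings, labels),
)

trainer.train()  # kicks off the fine-tuning loop
```

The Trainer hides the batching, optimizer stepping, and device placement behind this single train() call, which is exactly why customizing what happens inside the loop requires one of the two approaches above.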