Feeling inspired to write your first TDS post? We're always open to contributions from new authors.
Taking the first step toward mastering a new topic is always a bit daunting, and sometimes it's even very daunting! It doesn't matter whether you're learning about algorithms for the first time, dipping your toes into the exciting world of LLMs, or have just been tasked with revamping your team's data stack: taking on a challenge with little or no prior experience requires nontrivial amounts of courage and grit.
The calm and nuanced perspective of more seasoned practitioners can go a long way, too, which is where our authors excel. This week, we've gathered some of our standout recent contributions, tailored specifically to the needs of early-stage learners looking to expand their skill set. Let's roll up our sleeves and get started!
- From Parallel Computing Principles to Programming for CPU and GPU Architectures
For freshly minted data scientists and ML engineers, few areas are more essential to master than memory fundamentals and parallel execution. Shreya Shukla's thorough and accessible guide is the perfect resource for getting a firm footing in this topic, focusing on how to write code for both CPU and GPU architectures to accomplish fundamental tasks like vector-matrix multiplication.
- Multimodal Models — LLMs That Can See and Hear
If you're feeling confident in your knowledge of LLM fundamentals, why not take the next step and explore multimodal models, which can take in (and in some cases, generate) multiple forms of data, from images to code and audio? Shaw Talebi's primer, the first part of a new series, offers a solid foundation on which to build your practical know-how.
- Boosting Algorithms in Machine Learning, Part II: Gradient Boosting
Whether you've only recently started your ML journey or have been at it for so long that a refresher would be helpful, it's never a bad idea to firm up your knowledge of the basics. Gurjinder Kaur's ongoing exploration of boosting algorithms is a great case in point, presenting accessible, easy-to-digest breakdowns of some of the most powerful models out there, in this case gradient boosting.
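For a quick taste of the technique before diving into the article, here is a minimal gradient boosting sketch. It uses scikit-learn with a synthetic dataset; the dataset and hyperparameters are illustrative assumptions, not taken from Kaur's piece.

```python
# Minimal gradient boosting sketch (illustrative; not from the article).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression data stands in for a real dataset.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each shallow tree is fit to the residual errors of the ensemble so far,
# which is the core idea behind gradient boosting.
model = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```

The `learning_rate` shrinks each tree's contribution, trading more trees for better generalization; that trade-off is exactly the kind of detail the article unpacks.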
- NLP Illustrated, Part 1: Text Encoding
Another new project we're thrilled to share with our readers? Shreya Rao's just-launched series of illustrated guides to core concepts in natural language processing, the very technology powering many of the chatbots and AI apps that have made a splash in recent years. Part one zooms in on an essential step in nearly any NLP workflow: turning textual data into numerical inputs via text encoding.
- Decoding One-Hot Encoding: A Beginner’s Guide to Categorical Data
If you're looking to learn about another form of data transformation, don't miss Vyacheslav Efimov's clear and concise introduction to one-hot encoding, "one of the most fundamental techniques used for data preprocessing," which turns categorical features into numerical vectors.
- Excel Spreadsheets Are Dead for Big Data. Companies Need More Python Instead.
One kind of transition that's often even more difficult than learning a new topic is switching to a new tool or workflow, especially when the one you're moving away from sits squarely within your comfort zone. As Ari Joury, PhD explains, however, a temporary sacrifice of speed and ease of use is sometimes worth it, as in the case of adopting Python-based data tools in place of Excel spreadsheets.
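To make that Excel-to-Python transition concrete, here is a minimal sketch of a familiar spreadsheet task done in pandas. The table and column names are made up for illustration and do not come from the article.

```python
# A spreadsheet-style group-and-sum done in pandas instead of Excel.
# The data here is hypothetical, purely for illustration.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "revenue": [120.0, 80.0, 95.0, 110.0],
})

# The pandas equivalent of an Excel pivot table: total revenue per region.
totals = sales.groupby("region")["revenue"].sum()
print(totals)
```

Unlike a spreadsheet, this version is reproducible and scales to files far larger than Excel's row limit, which is the core of the article's argument.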