
7 AI Portfolio Projects to Boost Your Resume




Image by Author

 

I truly believe that to get hired in the field of artificial intelligence, you need a strong portfolio. This means you need to show recruiters that you can build AI models and applications that solve real-world problems.

In this blog, we will review 7 AI portfolio projects that will boost your resume. These projects come with tutorials, source code, and other supporting materials to help you build proper AI applications.

 

1. Build and Deploy Your Machine Learning Application in 5 Minutes

 

Project link: Build AI Chatbot in 5 Minutes with Hugging Face and Gradio

 


Screenshot from the project

 

In this project, you will build a chatbot application and deploy it on Hugging Face Spaces. It is a beginner-friendly AI project that requires minimal knowledge of language models and Python. First, you will learn the various components of the Gradio Python library to build a chatbot application, and then you will use the Hugging Face ecosystem to load the model and deploy it.

It's that simple.

 

2. Build AI Projects Using DuckDB: SQL Query Engine

 

Project link: DuckDB Tutorial: Building AI Projects

 


Screenshot from the project

 

In this project, you will learn to use DuckDB as a vector database for a RAG application and also as an SQL query engine using the LlamaIndex framework. The query engine will take natural-language input, convert it into SQL, and display the result in natural language. It is a simple and straightforward project for beginners, but before you dive into building the AI application, you need to learn a few basics of the DuckDB Python API and the LlamaIndex framework.

 

3. Building a Multi-step AI Agent Using the LangChain and Cohere API

 

Project link: Cohere Command R+: A Complete Step-by-Step Tutorial

 


Screenshot from the project

 

The Cohere API offers stronger built-in functionality than the OpenAI API for developing agentic AI applications. In this project, we will explore the various features of the Cohere API and learn to create a multi-step AI agent using the LangChain ecosystem and the Command R+ model. This AI application will take the user's query, search the web using the Tavily API, generate Python code, execute the code using a Python REPL, and then return the visualization requested by the user. This is an intermediate-level project for individuals with basic knowledge who are interested in building advanced AI applications using the LangChain framework.

 

4. Fine-Tuning Llama 3 and Using It Locally

 

Project link: Fine-Tuning Llama 3 and Using It Locally: A Step-by-Step Guide | DataCamp

 


Image from the project

 

A popular project on DataCamp that will help you fine-tune any model using free resources and convert it to the Llama.cpp format so it can run locally on your laptop without internet access. You will first learn to fine-tune the Llama 3 model on a medical dataset, then merge the adapter with the base model and push the full model to the Hugging Face Hub. After that, you will convert the model files into the Llama.cpp GGUF format, quantize the GGUF model, and push the file to the Hugging Face Hub. Finally, you will use the fine-tuned model locally with the Jan application.

 

5. Multilingual Automatic Speech Recognition

 

Mannequin Repository: kingabzpro/wav2vec2-large-xls-r-300m-Urdu

Code Repository: kingabzpro/Urdu-ASR-SOTA

Tutorial Link: Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers

 


Screenshot from kingabzpro/wav2vec2-large-xls-r-300m-Urdu

 

My most popular project ever! It gets almost half a million downloads every month. I fine-tuned the Wav2Vec2 Large model on an Urdu dataset using the Transformers library. After that, I improved the quality of the generated output by integrating a language model.

 


Screenshot from Urdu ASR SOTA – a Hugging Face Space by kingabzpro

 

In this project, you will fine-tune a speech recognition model in your preferred language and integrate it with a language model to improve its performance. After that, you will use Gradio to build an AI application and deploy it to the Hugging Face server. Fine-tuning is a challenging task that requires learning the basics, cleaning the audio and text datasets, and optimizing model training.
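Once the model is fine-tuned and on the Hub, using it is only a few lines with the `transformers` pipeline. A sketch using the project's actual model repository (the audio file path is a placeholder):

```python
# Sketch: transcribe Urdu audio with the fine-tuned Wav2Vec2 model.
from transformers import pipeline

MODEL_ID = "kingabzpro/wav2vec2-large-xls-r-300m-Urdu"  # the project's Hub repo

def transcribe(audio_path: str) -> str:
    asr = pipeline("automatic-speech-recognition", model=MODEL_ID)
    return asr(audio_path)["text"]

if __name__ == "__main__":
    print(transcribe("sample.wav"))  # any 16 kHz mono audio file
```

Wrapping `transcribe` in a Gradio audio interface is essentially all the demo Space does on top of this.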

 

6. Building CI/CD Workflows for Machine Learning Operations

 

Project link: A Beginner's Guide to CI/CD for Machine Learning | DataCamp

 


Image from the project

 

Another popular project on GitHub. It involves building a CI/CD pipeline for machine learning operations. In this project, you will learn about machine learning project templates and how to automate the processes of model training, evaluation, and deployment. You will learn about Makefile, GitHub Actions, Gradio, Hugging Face, GitHub secrets, CML actions, and various Git operations.

Ultimately, you will build end-to-end machine learning pipelines that run whenever new data is pushed or code is updated. The pipeline will use the new data to retrain the model, generate model evaluations, pull the trained model, and deploy it on the server. It is a fully automated system that generates logs at every step.
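As a taste of the "generate model evaluations" step, here is a small sketch of the kind of script a CI job can run after training to produce a Markdown report for CML to post back to the pull request (file names and metric values are illustrative):

```python
# Sketch: write evaluation metrics to a Markdown report for CML to post.
import json
from pathlib import Path

def write_report(metrics: dict, path: str = "report.md") -> str:
    """Render metrics as Markdown and save them for the CI step to pick up."""
    lines = ["## Model Evaluation", ""]
    lines += [f"- **{name}**: {value:.3f}" for name, value in metrics.items()]
    report = "\n".join(lines)
    Path(path).write_text(report)
    return report

if __name__ == "__main__":
    # In the real pipeline these numbers come from the evaluation step.
    metrics = json.loads('{"accuracy": 0.914, "f1": 0.887}')
    print(write_report(metrics))
```

A GitHub Actions step then only needs to run this script and hand `report.md` to `cml comment create`.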

 

7. Fine-tuning Stable Diffusion XL with DreamBooth and LoRA

 

Project link: Fine-tuning Stable Diffusion XL with DreamBooth and LoRA | DataCamp

 


Image from the project

 

We have learned about fine-tuning large language models, but now we will fine-tune a generative image model using personal photos. Fine-tuning Stable Diffusion XL requires only a few images and, as a result, you can get optimal results, as shown above.

In this project, you will first learn about Stable Diffusion XL and then fine-tune it on a new dataset using Hugging Face AutoTrain Advanced, DreamBooth, and LoRA. You can use either Kaggle for free GPUs or Google Colab. The project comes with a guide to help you every step of the way.

 

Conclusion

 

All of the projects mentioned in this blog were built by me. I made sure to include a guide, source code, and other supporting materials.

Working on these projects will give you valuable skills and help you build a strong portfolio, which can improve your chances of securing your dream job. I highly recommend documenting your projects on GitHub and Medium, and then sharing them on social media to attract more attention. Keep working and keep building; these projects can also be added to your resume as real experience.
 
 

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in technology management and a bachelor's degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.
