<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Lauren C. </title>
    <description>The latest articles on Forem by Lauren C.  (@laurenc2022).</description>
    <link>https://forem.com/laurenc2022</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1015153%2F685e0417-3771-478a-b578-a9129c6b6201.jpeg</url>
      <title>Forem: Lauren C. </title>
      <link>https://forem.com/laurenc2022</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/laurenc2022"/>
    <language>en</language>
    <item>
      <title>Web Scraper: E-commerce Price Tracker</title>
      <dc:creator>Lauren C. </dc:creator>
      <pubDate>Mon, 11 Nov 2024 02:13:03 +0000</pubDate>
      <link>https://forem.com/laurenc2022/hugging-face-how-to-get-started-with-free-chatgtp-426f</link>
      <guid>https://forem.com/laurenc2022/hugging-face-how-to-get-started-with-free-chatgtp-426f</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Extract data from file using terminal
&lt;/li&gt;
&lt;li&gt;Extract data from a file using a script &lt;/li&gt;
&lt;li&gt;Putting it all together to build a web scraper &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Each step of the tutorial has a purpose. Try not to skip a step, because you will miss valuable skills needed to become a proficient software engineer. These small steps are the skills you need to own your craft. &lt;/p&gt;

&lt;p&gt;Section 1: Extracting data from a file using the terminal teaches command-line skills and how to narrow down the arguments needed to properly extract data. &lt;/p&gt;

&lt;p&gt;Section 2: Extracting data from a file using a script gives learners practice writing precise scripts. In computer science you want to be precise, because the computer does exactly what it is told. Writing these scripts builds the accuracy you will need for the final step. &lt;/p&gt;

&lt;p&gt;Section 3: Putting it all together to build a web scraper is the final step, where you show you have mastered the skill of web scraping. You will build a web scraper you can use to crawl websites, gather information, or take on freelancing clients. &lt;/p&gt;




&lt;h2&gt;
  
  
  How to Build a Web Scraper
&lt;/h2&gt;

&lt;p&gt;Assignment: create an E-commerce Price Tracker&lt;/p&gt;

&lt;h3&gt;
  
  
  Section 1:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Select one website&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this tutorial I will be using LinkedIn. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download one job-listing HTML page onto your computer using wget&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Website pages can be downloaded as HTML files by right-clicking on the page, selecting "View page source", then right-clicking again and choosing "Save as". Another way is to use wget. &lt;/p&gt;

&lt;p&gt;Wget is a command-line application that saves a page onto your computer in the current directory. The -O flag specifies the output file name. You can see more information about wget and its other flags by using the help command below: &lt;/p&gt;

&lt;p&gt;$ wget -O name_your_file.html &lt;a href="http://www.websiteYouAreScraping.com" rel="noopener noreferrer"&gt;www.websiteYouAreScraping.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;$ wget --help &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Using command line applications
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;grep is a command-line application that searches a file for a specified term. The vertical bar (|) feeds the output of the grep command into the wc command. wc is another terminal application, one that counts the number of lines, words, and characters in its input: &lt;/p&gt;

&lt;p&gt;$grep "search term" file_name.html | wc -l &lt;/p&gt;

&lt;p&gt;sort will organize the data. It takes the output of the grep command, orders it, and outputs the sorted data. The greater-than symbol (&amp;gt;) redirects the sorted output into a new file you specify. &lt;/p&gt;

&lt;p&gt;$grep "search term" file_name.html | sort &amp;gt; output_file.txt&lt;/p&gt;

&lt;h3&gt;
  
  
  Alias Commands &amp;amp; Shortening default prompt
&lt;/h3&gt;

&lt;p&gt;When you use a long command often, it can be useful to create an alias for it. Aliases for the terminal can be made by adding the command to the .bash_profile or .bashrc file. &lt;/p&gt;

&lt;p&gt;When the prompt in the terminal becomes long because the full list of directories is shown, it can easily be shortened. Use the command export PS1=&lt;code&gt;[\!:\w]$&lt;/code&gt;. The backslash escapes \! and \w are special characters that control which parts of the prompt to omit and which to include. You can read more at &lt;a href="https://ss64.com/bash/syntax-prompt.html" rel="noopener noreferrer"&gt;https://ss64.com/bash/syntax-prompt.html&lt;/a&gt;&lt;/p&gt;
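&lt;p&gt;For example, a long grep command can be shortened to a short alias. The alias name and search term here are just illustrations; the alias line is what you would add to .bashrc or .bash_profile to make it permanent: &lt;/p&gt;

```shell
# Turn on alias expansion inside a bash script (interactive shells have it on already)
shopt -s expand_aliases

# Shorten a long, frequently used command to a short name
alias countmatches='grep -c "job"'

# Use the alias exactly like the original command
printf 'job: A\njob: B\n' > page.html
countmatches page.html
```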

&lt;h3&gt;
  
  
  Section 2:
&lt;/h3&gt;
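&lt;p&gt;As a starting point for this section, here is a minimal sketch of the Section 1 pipeline wrapped in a reusable script. The script name, file names, and search term are placeholders: &lt;/p&gt;

```shell
# Write the Section 1 pipeline into a reusable script (names are placeholders)
printf '%s\n' '#!/bin/sh' \
  '# Usage: sh extract.sh FILE TERM' \
  'grep "$2" "$1" | sort > output_file.txt' \
  'echo "saved matches to output_file.txt"' > extract.sh

# Try the script on a small practice file
printf 'job: Backend Engineer\njob: Data Analyst\n' > file_name.html
sh extract.sh file_name.html "job"
```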

&lt;h3&gt;
  
  
  Section 3:
&lt;/h3&gt;
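&lt;p&gt;As a sketch of the final assignment, the pieces above can be combined into a small price tracker. The URL, file names, and price pattern below are placeholders, and a saved stand-in page is used so the steps run offline: &lt;/p&gt;

```shell
# Minimal price-tracker sketch. In real use the page would come from wget, e.g.:
#   wget -O product.html http://www.websiteYouAreScraping.com
# Here a saved stand-in page lets the steps run offline (placeholder content):
printf 'price: 19.99 USD\n' > product.html

# Extract the first price and append it with a date stamp to a CSV log
price=$(grep -o '[0-9][0-9]*\.[0-9][0-9]' product.html | head -n 1)
echo "$(date +%Y-%m-%d),$price" >> price_log.csv

cat price_log.csv
```

Run on a schedule (for example with cron), this log accumulates one price per day, which is the "track price changes over time" part of the assignment.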




&lt;h2&gt;
  
  
  Closing Remarks
&lt;/h2&gt;

&lt;p&gt;Now that you have completed the tutorial, I encourage you to select a website and practice scraping data from it. Try using a web scraping framework, or get paid for your new skills by finding freelance work: go on UpWork.com or Freelancer.com and build a web scraper for a client. &lt;/p&gt;

&lt;h2&gt;
  
  
  Web Scraper Project Ideas
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;E-commerce Price Tracker&lt;/strong&gt;: Scrape product prices from various e-commerce websites and track price changes over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Job Listings Aggregator&lt;/strong&gt;: Extract job listings from multiple job portals and compile them into a single, searchable database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Social Media Sentiment Analysis&lt;/strong&gt;: Scrape social media platforms for mentions of a particular brand or product and analyze the sentiment of the comments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time News Aggregator&lt;/strong&gt;: Extract news articles from various news websites and create a real-time news feed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stock Market Trend Analysis&lt;/strong&gt;: Scrape stock market data and analyze trends to predict future movements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recipe Recommendation Engine&lt;/strong&gt;: Extract recipes from cooking websites and create a recommendation engine based on user preferences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Travel Itinerary Planner&lt;/strong&gt;: Scrape travel websites for flight, hotel, and activity information to create personalized travel itineraries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sports Statistics Collector&lt;/strong&gt;: Extract sports statistics from various sources and create a comprehensive database for analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Product Review Aggregator&lt;/strong&gt;: Scrape product reviews from e-commerce sites and compile them into a single, searchable database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real Estate Market Analysis&lt;/strong&gt;: Extract real estate listings and analyze market trends to provide insights for buyers and sellers.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>challenge</category>
      <category>beginners</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Hugging Face: Interacting with Roberta and Hugging Face for the first time</title>
      <dc:creator>Lauren C. </dc:creator>
      <pubDate>Sat, 14 Sep 2024 23:41:49 +0000</pubDate>
      <link>https://forem.com/laurenc2022/hugging-face-interacting-with-roberta-and-hugging-face-for-the-first-time-3feb</link>
      <guid>https://forem.com/laurenc2022/hugging-face-interacting-with-roberta-and-hugging-face-for-the-first-time-3feb</guid>
      <description>&lt;p&gt;This is my first interaction with Hugging Face. The free and open Ai option &lt;/p&gt;

&lt;p&gt;Date: September 15, 2024&lt;/p&gt;

&lt;p&gt;Prerequisites: I bought a 1T external SSD and created this tutorial using that external drive. I am using a Windows computer. This is where the tutorial begins. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Change directory to the D drive &lt;br&gt;
$ D: &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a directory on the external SSD named for the version of Python and pip I'm using&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;$ mkdir python312&lt;br&gt;
$ cd python312&lt;br&gt;
$ mkdir Scripts&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install Python and pip from the official website. Save them in the python312 directory, then install the virtual environment tool with the command: &lt;br&gt;
$ pip install virtualenv&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify everything is working by checking the versions &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;python --version&lt;br&gt;
pip --version &lt;br&gt;
virtualenv --version&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the virtual environment. The tool is called virtualenv, and the name of the virtual environment is my_venv. &lt;/li&gt;
&lt;li&gt;In the next command, "python" invokes the interpreter and -m runs a module as a script: the venv module is invoked and my_venv is the name of the virtual environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;$ virtualenv my_venv &lt;br&gt;
$ python -m venv my_venv&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Activate virtual environment &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;$ .\my_venv\Scripts\activate&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Transformers, Datasets, PyTorch, and TensorFlow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;$ pip install transformers&lt;br&gt;
$ pip install datasets&lt;br&gt;
$ pip install torch&lt;br&gt;
$ pip install tensorflow &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Visual Studio from the Microsoft website to create and edit Python files &lt;/li&gt;
&lt;li&gt;Open the Visual Studio Installer &lt;/li&gt;
&lt;li&gt;Select Modify and add Python development by checking the box &lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Modify &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Scripts directory, create a new application in Visual Studio. Save it on the D drive, name it "roberta_hugging_face", and save the solution and project in the same directory &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add this code to the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from transformers import pipeline

# Load the sentiment analysis pipeline
sentiment_pipeline = pipeline("sentiment-analysis")

# Perform sentiment analysis
result = sentiment_pipeline("Hugging Face is creating amazing tools for NLP!")
print(result)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the file. You should be in this directory: &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;D:\python312\Scripts\roberta_hugging_face&lt;/p&gt;

&lt;p&gt;$ python roberta_hugging_face.py&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If needed, install the backward-compatible Keras package and retry running &lt;br&gt;
$ pip install tf-keras&lt;br&gt;
$ python roberta_hugging_face.py&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the download is complete, you will see the sentiment result for the input "Hugging Face is creating amazing tools for NLP!"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To deactivate&lt;br&gt;
$ deactivate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Safely disconnect SSD  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>huggingface</category>
      <category>openai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Pedagogy for Self Taught Developers: Introduction to Identifying Your Technical Skill Level with Bloom’s Taxonomy</title>
      <dc:creator>Lauren C. </dc:creator>
      <pubDate>Tue, 08 Aug 2023 15:10:58 +0000</pubDate>
      <link>https://forem.com/laurenc2022/intro-to-self-identifying-your-technical-skills-with-blooms-taxonomy-a-series-of-blog-posts-2e6f</link>
      <guid>https://forem.com/laurenc2022/intro-to-self-identifying-your-technical-skills-with-blooms-taxonomy-a-series-of-blog-posts-2e6f</guid>
      <description>&lt;p&gt;&lt;strong&gt;Intro&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you're stuck in tutorial hell, the right answer is &lt;em&gt;not&lt;/em&gt; to just keep going. Developers need a real way to measure growth in a coding skill. Thanks to the work of teachers and educators, we &lt;em&gt;can&lt;/em&gt; identify growth. We can be purposeful about the way we learn, so as to maximize growth and minimize the time needed to master a skill. Learning hard things isn't new; educators have been teaching tough topics since the dawn of education. &lt;/p&gt;

&lt;p&gt;If you are new to code or you are leading a team of junior developers, everyone could use a marker to measure progress. Understanding what actionable steps can take you from zero to mastery will help you reach success sooner rather than later. &lt;/p&gt;

&lt;p&gt;Bloom's taxonomy is a well-respected system of classification taught in every college of education across America. Simply put, Bloom's taxonomy is a tool teachers use to identify which activities best support students' growth. &lt;/p&gt;

&lt;p&gt;In this blog post series, I will dive into Bloom's taxonomy and its real-world application to learning a new technical skill. Anyone should be able to use this series for any technical skill and see growth in their understanding. That may seem like a lofty goal for a blog, but Bloom's taxonomy has been in use for many decades and can be applied by anyone learning highly technical skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Articles&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the Articles section, I will summarize one to three high-quality readings on the topic of Bloom's taxonomy as it relates to computer science education. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Actionable Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this section I will list some tasks anyone could work on to grow and demonstrate their understanding. Examples of deliverables include summarizing a video tutorial or completing a coding challenge. I will provide some PDF templates to help you get started. &lt;/p&gt;

&lt;p&gt;Actionable Steps are meant to get you started working quickly. Feel free to make up your own action items. If you do, please share links to what you have built or learned in the comments below. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High Quality Learning Materials&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes Bloom's taxonomy can be seen in learning materials available online today. When I find good learning materials, I will share them with you, along with why I believe each is a good tool to learn from. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Mentions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each blog post in this series follows the same format: Intro, Academic Articles, Actionable Steps, High Quality Learning Materials, and Final Mentions. &lt;/p&gt;

&lt;p&gt;My goal for this series is to help anyone apply Bloom's taxonomy in a straightforward way and for readers to feel a sense of growth in their learning. &lt;/p&gt;

&lt;p&gt;Image source: Vanderbilt University Center for Teaching (&lt;a href="https://flic.kr/p/LQuqT2"&gt;Click here for link to image&lt;/a&gt;) / (&lt;a href="https://creativecommons.org/licenses/by/2.0/"&gt;Click here for link to image license&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>learning</category>
      <category>coding</category>
      <category>computerscience</category>
    </item>
  </channel>
</rss>
