<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Andrew Lucker</title>
    <description>The latest articles on Forem by Andrew Lucker (@andrewlucker).</description>
    <link>https://forem.com/andrewlucker</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2650%2Fca65aa0a-15d4-4443-93fd-53c54088716f.png</url>
      <title>Forem: Andrew Lucker</title>
      <link>https://forem.com/andrewlucker</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/andrewlucker"/>
    <language>en</language>
    <item>
      <title>What is IDF and how is it calculated?</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Sun, 24 Sep 2017 05:53:23 +0000</pubDate>
      <link>https://forem.com/andrewlucker/what-is-idf-and-how-is-it-calculated</link>
      <guid>https://forem.com/andrewlucker/what-is-idf-and-how-is-it-calculated</guid>
      <description>

&lt;p&gt;Inverse Document Frequency&lt;/p&gt;

&lt;p&gt;IDF is one of the most basic terms in modern search engine relevance calculation. It is used to determine how &lt;em&gt;rare&lt;/em&gt; a term is and how relevant it is to the original query. For example, take the query “the Golden State Warriors”. This query is difficult because no individual word identifies our intention to search for a basketball team. Instead we need to look at groups of words and weigh how relevant each set is to the overall query. This is the basis of flat search query relevance, and it all starts with IDF.&lt;/p&gt;

&lt;p&gt;Before we can calculate IDF we need to associate each document or query with a set of features. For this tutorial we will use only &lt;em&gt;n-grams&lt;/em&gt;. An n-gram is a contiguous sequence of one or more words. We can use Python’s string methods to quickly extract features from a document or query.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def extract_features( document ):
    terms = tuple(document.lower().split())
    features = set()
    for i in range(len(terms)):
    for n in range(1,4):
    if i+n &amp;lt;= len(terms):
    features.add(terms[i:i+n])
    return features

print(extract_features(â€˜The Golden State Warriors’))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next we need to calculate Document Frequency, then invert it. The formula for IDF starts with the total number of documents in our database, N. We divide this by the number of documents containing our term, tD. The ratio N/tD is never less than 1: a ratio of 1 means the term is present in every document, and no term can be more common than that. Finally we usually take the logarithm of the ratio, because we may be indexing billions of documents, and raw IDF values become unwieldy unless we work in orders of magnitude.&lt;/p&gt;
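&lt;p&gt;As a quick worked example of the formula (a sketch; the counts here are made up, matching a four-document database): with N = 4 and a term appearing in tD = 1 document, IDF = log(4/1) ≈ 1.386, while a term present in all four documents gets log(4/4) = 0.&lt;/p&gt;

```python
import math

N = 4   # total documents in our database
tD = 1  # documents containing the term

print(math.log(N / tD))  # ~1.386 for a rare term
print(math.log(N / N))   # 0.0 for a term present in every document
```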

&lt;p&gt;Here we can calculate the IDF for all of our features in a small database of documents.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def extract_features( document ):
   terms = tuple(document.lower().split())
   features = set()
   for i in range(len(terms)):
      for n in range(1,4):
          if i+n &amp;lt;= len(terms):
              features.add(terms[i:i+n])
   return features

documents = [
   "This article is about the Golden State Warriors",
   "This article is about the Golden Arches",
   "This article is about state machines",
   "This article is about viking warriors"]

def calculate_idf( documents ):
   N = len(documents)
   from collections import Counter
   tD = Counter()
   for d in documents:
      features = extract_features(d)
      for f in features:
          tD[" ".join(f)] += 1
   IDF = []
   import math
   for (term,term_frequency) in tD.items():
       term_IDF = math.log(float(N) / term_frequency)
       IDF.append(( term_IDF, term ))
   IDF.sort(reverse=True)
   return IDF

for (IDF, term) in calculate_idf(documents):
    print(IDF, term)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see in the output, rare terms are assigned a higher IDF and thus can be weighted more heavily in relevance calculations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VDJNMpbZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1000/1%2ApDGJo0U0-7WM-yLQC4cYcg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VDJNMpbZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1000/1%2ApDGJo0U0-7WM-yLQC4cYcg.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;


</description>
      <category>datascience</category>
      <category>computerscience</category>
      <category>libraryscience</category>
      <category>algorithms</category>
    </item>
    <item>
      <title>(not very) Deep Learning</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Fri, 22 Sep 2017 20:52:58 +0000</pubDate>
      <link>https://forem.com/andrewlucker/not-very-deep-learning</link>
      <guid>https://forem.com/andrewlucker/not-very-deep-learning</guid>
      <description>

&lt;p&gt;TensorFlow is not a unicorn; it’s just another tool&lt;/p&gt;

&lt;p&gt;For the last year I’ve been playing around with different algorithms for playing ATARI games through the OpenAI platform. The nice thing about these games is that all actions (or inaction) are deterministic. If you make all the same choices then you will get the same result.&lt;/p&gt;

&lt;p&gt;So far the best performance I’ve found comes from the A3C algorithm. A3C is similar to DQN, but it runs many parallel variants of the same strategy to learn which actions or insights have large impacts on value and which information can be safely ignored. This is good enough to solve a few of the simpler problems, but it fails to gain any deeper insight into complex environments, with particular trouble at object recognition.&lt;/p&gt;

&lt;p&gt;A good example is the ATARI River Raid environment. Here is one of my A3C bots playing:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/a4wxAWY-Dgk"&gt; &lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;If you watch the clip you will see that the bot adopts the basic strategy of “avoid obstacles and keep shooting”. This would be a good strategy if not for the concept of “fuel” that the game introduces. In order to progress further in levels there is a deferred need to 1) not shoot the fuel cartridges and 2) collect them by touching them. The problem is that there is no near-term penalty for ignoring the fuel charges, and thus the agent never learns that they are important. This bot has been fairly well trained, so I suspect that more training would not help this problem much. It is fundamentally a problem of breadth vs. depth of value search.&lt;/p&gt;

&lt;p&gt;What I would like to see more of in the AI research space are methods that combine Object and Feature Recognition with Policy Reinforcement Learning. This combination would help simplify these game environments and help the agent cut through the immense redundancy in each one. Atari games are not like Go: each pixel is not critically important. Objects are usually larger sprites, with the exception of bullets and flak.&lt;/p&gt;

&lt;p&gt;So that is where I am stuck right now. I will be studying object recognition myself to see if I can reconcile that research with the policy learning side. Hopefully we will see deeper learning soon; the current pace is encouraging.&lt;/p&gt;


</description>
      <category>youtube</category>
      <category>computerscience</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Search Engine Optimization en vogue</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Thu, 21 Sep 2017 22:22:41 +0000</pubDate>
      <link>https://forem.com/andrewlucker/search-engine-optimization-en-vogue</link>
      <guid>https://forem.com/andrewlucker/search-engine-optimization-en-vogue</guid>
      <description>&lt;p&gt;All SEO claims should come with a warranty&lt;/p&gt;

&lt;p&gt;There is a lot of contradictory advice on SEO. For example, something as simple as whether http or https is better is up for debate. Google itself has announced that it will &lt;a href="https://webmasters.googleblog.com/2014/08/https-as-ranking-signal.html" rel="noopener noreferrer"&gt;reward sites for using HTTPS/SSL&lt;/a&gt;. Given Google’s dominance in the search engine space, it may be safe to assume that HTTPS will improve your rank. However, a conflicting ranking variable is site loading speed: HTTPS can prevent intermediate caching and requires a lengthier handshake to complete a transaction. This is the reason Google developed the SPDY protocol. So now we have three options, and it is not clear which mix of speed and safety will earn the most respect from Google’s algorithm. And that is just one search engine; other portals or engines may analyze your site very differently. Personally I have not found any statistically significant difference between HTTP and HTTPS with regard to search engine ranking.&lt;/p&gt;

&lt;p&gt;For more practical advice, it is always best to consider what produces the best customer experience. It should be assumed that this is the goal of both the portal and the site. So let’s break down a few features that improve site usability: speed, portal navigation, site navigation, content quality, and traffic quality.&lt;/p&gt;

&lt;p&gt;Speed is easy enough to measure. Google provides tools to &lt;a href="https://developers.google.com/speed/pagespeed/" rel="noopener noreferrer"&gt;analyze your site speed&lt;/a&gt; and find ways to improve. Good advice is simply engineering your site well and using a CDN.&lt;/p&gt;

&lt;p&gt;Portal navigation depends on the context in which your site will appear. The two most common contexts are Google and Facebook. For both, it is good advice to have a readable title and page meta description. For Facebook a good image is also helpful. Mobile consideration is also important, and again Google &lt;a href="https://search.google.com/test/mobile-friendly" rel="noopener noreferrer"&gt;has a tool to help with that&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Site navigation is up to your designer. For crawler accessibility it is also important to structure your public links so that a spider won’t wrap itself into a loop. Also, unique pages should not be hidden behind URL parameters.&lt;/p&gt;

&lt;p&gt;Content Quality is probably the most important consideration of all, but it has no single solution. Track your users and make sure your site is engaging, with visitors staying longer and returning. Search engines have lots of information regarding entry and exit, so good traffic patterns are rewarded.&lt;/p&gt;

&lt;p&gt;Traffic Quality is simple enough. Don’t spam and you won’t get spammed. Don’t buy traffic and avoid fraudulent clicks. Moz has a tool to help see &lt;a href="https://moz.com/researchtools/ose/" rel="noopener noreferrer"&gt;how reliable your upstream links are&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I purposely left out PageRank, because it has been reported to be less and less of a rank predictor. Google is moving to new algorithms that look more at consumer experience and less at in-bred linking schemes.&lt;/p&gt;

&lt;p&gt;That is all I can think of for now. If anyone has tips feel free to leave them in the comments below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F953%2F1%2A8YnHyDB_ZYDzLOh4ZLwqYQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F953%2F1%2A8YnHyDB_ZYDzLOh4ZLwqYQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>contentstrategy</category>
      <category>contentmarketing</category>
      <category>seo</category>
    </item>
    <item>
      <title>Are computers faster than the human brain?</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Wed, 20 Sep 2017 23:46:54 +0000</pubDate>
      <link>https://forem.com/andrewlucker/are-computers-faster-than-the-human-brain</link>
      <guid>https://forem.com/andrewlucker/are-computers-faster-than-the-human-brain</guid>
      <description>&lt;p&gt;Well, it depends what you are measuring …&lt;/p&gt;

&lt;p&gt;There &lt;a href="https://spectrum.ieee.org/tech-talk/computing/networks/estimate-human-brain-30-times-faster-than-best-supercomputers" rel="noopener noreferrer"&gt;have&lt;/a&gt; &lt;a href="https://www.scientificamerican.com/article/computers-vs-brains/" rel="noopener noreferrer"&gt;been&lt;/a&gt; &lt;a href="http://bgr.com/2016/02/27/power-of-the-human-brain-vs-super-computer/" rel="noopener noreferrer"&gt;many&lt;/a&gt; &lt;a href="https://www.forbes.com/sites/quora/2016/03/02/how-powerful-is-the-human-brain-compared-to-a-computer/#658e7d86628e" rel="noopener noreferrer"&gt;attempts&lt;/a&gt; to quantify the question of whether brains or machines are faster, however the results are apples and oranges. To really communicate the difference there needs to be some commonality to which we can compare. There are several ways we can test processing speed, so let’s jump in.&lt;/p&gt;

&lt;p&gt;For starters we can look at something unfair for humans: arithmetic. A good fifth grader should be able to answer addition, subtraction, and multiplication tables at a rate of about one answer per second. A 1GHz computer, like your old cellphone, can do a billion per second. So clearly conscious arithmetic is not our strong point.&lt;/p&gt;

&lt;p&gt;However, let’s turn that number on its head: how many operations per second does our brain perform in the subconscious visual cortex? If you imagine a 3D figure and rotate, zoom, or transform it, you should find that it isn’t that hard and it all happens in realtime. The Graphics Processing Unit of the human imagination still puts high-end computer GPUs to shame. This is strange, though: there are only &lt;a href="https://en.wikipedia.org/wiki/Visual_cortex" rel="noopener noreferrer"&gt;~140 million neurons&lt;/a&gt; in each lobe of the visual cortex, while there are over &lt;a href="https://en.wikipedia.org/wiki/Transistor_count" rel="noopener noreferrer"&gt;30 billion transistors&lt;/a&gt; on the latest processors. Clearly our brains are more efficient at this sort of processing, but is it fair to compare hardware without attention to software?&lt;/p&gt;

&lt;p&gt;So we see that humans are in fact running on out-of-date hardware, but we are much more efficient where it counts. Today’s computers on the other hand are beastly machines, but the software/hardware mix is very inefficient and slow. The result is that humans still have a significant processing advantage in areas that we have already adapted to. For other games that don’t come naturally to us, we are outmatched. For all &lt;a href="https://qz.com/993147/the-awful-frustration-of-a-teenage-go-champion-playing-googles-alphago/" rel="noopener noreferrer"&gt;deterministic games of historical significance&lt;/a&gt;, humans play orders of magnitude less proficiently than modern AIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F400%2F1%2A0A72T-1-CFRhDE5I-8pmPQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F400%2F1%2A0A72T-1-CFRhDE5I-8pmPQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>psychology</category>
      <category>deeplearning</category>
      <category>ai</category>
      <category>gametheory</category>
    </item>
    <item>
      <title>What is necessary for next steps in deeper learning?</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Tue, 19 Sep 2017 23:25:09 +0000</pubDate>
      <link>https://forem.com/andrewlucker/what-is-necessary-for-next-steps-in-deeper-learning</link>
      <guid>https://forem.com/andrewlucker/what-is-necessary-for-next-steps-in-deeper-learning</guid>
      <description>

&lt;p&gt;You can explore the related code snippets &lt;a href="https://tech.io/playgrounds/2e3ad5751ba8a756462ece41ee0b739d6925/activating-latent-semantic-models-with-natural-language-inference/input-and-output"&gt;here on tech.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Deep Learning has introduced methods capable of extracting latent variables from an amazing variety of inputs. Raw facial image data can be quickly converted into features such as emotional expression, face orientation, or even suspect identity. Deep Learning has similarly proven capable in the application of motor control and near-term action planning. With these technologies, and an abundance of training, we are discernibly closer to Sci-Fi level Artificial Intelligence. However, there remains a large gap between the &lt;em&gt;input&lt;/em&gt; and &lt;em&gt;output&lt;/em&gt; applications. Here we propose a Natural Language model capable of interfacing between the two applications in the most general sense. The basic outline is that natural language becomes the &lt;em&gt;output&lt;/em&gt; of recognizers and the &lt;em&gt;input&lt;/em&gt; of planners. The model must also account for the appropriate use of the various models available.&lt;/p&gt;

&lt;p&gt;To start let’s imagine an app that maps the emotions of a user’s face caught through a webcam onto that of a generic face. This app could be written as a simple mapping of happy/sad represented as a number: positive is happy, negative is sad. This is a bit restrictive because emotions are more complex. Specifically, there are considered to be &lt;a href="http://www.cell.com/current-biology/abstract/S0960-9822(13)01519-4"&gt;four base emotional states&lt;/a&gt; that can be expressed through facial expression: happiness, sadness, anger, and fear.&lt;/p&gt;

&lt;p&gt;To expand our range of emotions we could instead create a structure of floating point numbers, one for each emotion. For each emotion positive would represent positive confidence, zero would represent no confidence, and negative would represent negative confidence.&lt;/p&gt;

&lt;p&gt;This model is still lacking if we want to include other information such as face orientation. To encode this information we would need another structure for possible information regarding face orientation. To maximize generality we should also consider the case where this information could be missing or partial. At this point we need to consider a more formal grammar. Eliding the specifics of such a grammar, let’s just continue with the assumption that this grammar exists (any human language would suffice to encode this information, for example).&lt;/p&gt;

&lt;p&gt;The final step would be to implement inference, action, and planning. To achieve this we should reflect on what we have so far. The current model is basically just features with confidence values. To complete inference and planning we need two more values associated with each feature: justification direction, and justification distance.&lt;/p&gt;

&lt;p&gt;These last two values are hard to understand without hands-on examples. The simpler to explain of the two is “justification distance”. The process of creating a feature confidence value can be either long or short. It is generally accepted in philosophy, math, and other disciplines that long proofs are more likely to be wrong, unless they were designed to be elaborate. For this reason it helps to mark confidence values with a length ranging from very short (atomic) to very long (astronomic). At first swing it might seem reasonable to just fold this into the confidence values themselves. However, truth and justification length often vary independently, so encoding them together would lose important information.&lt;/p&gt;

&lt;p&gt;The final feature associated value would be justification direction. This is simply the semantic direction of inference. If the feature should be true due to inference, then the direction is forward (from assumptions to conclusion). If the feature is observed to be true, then the direction is backward (from conclusion to assumptions).&lt;/p&gt;
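&lt;p&gt;Putting the pieces together, each feature in this model carries a confidence value plus the two justification values. Here is a minimal sketch in Python (the class and field names are my own, purely illustrative):&lt;/p&gt;

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    FORWARD = "forward"    # inferred: from assumptions to conclusion
    BACKWARD = "backward"  # observed: from conclusion to assumptions

@dataclass
class Feature:
    name: str           # e.g. "happiness" or "face_orientation"
    confidence: float   # positive, zero, or negative confidence
    distance: int       # justification length: 0 = atomic, large = astronomic
    direction: Direction

# a directly observed emotion vs. one inferred through a short chain
observed = Feature("happiness", 0.8, 0, Direction.BACKWARD)
inferred = Feature("fear", -0.3, 3, Direction.FORWARD)
print(observed)
print(inferred)
```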

&lt;p&gt;That is all that this model needs. I’ll be working on integrating these features into &lt;a href="https://www.youtube.com/channel/UCrECJ9ufXImOPeqYxGS9Jtw"&gt;my OpenAI bots&lt;/a&gt;, so watch for updates. These techniques are very necessary for unsupervised learning and that is the most common task available, so there is plenty to explore.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D-LYsfOe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AFdQn7YDFxkayvngpwrtiEg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D-LYsfOe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AFdQn7YDFxkayvngpwrtiEg.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://hackernoon.com/what-is-necessary-for-next-steps-in-deeper-learning-5b0f0100fcb"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;


</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>functionalprogrammi</category>
      <category>rust</category>
    </item>
    <item>
      <title>Why does the Pomodoro Technique work so well</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Tue, 19 Sep 2017 04:25:08 +0000</pubDate>
      <link>https://forem.com/andrewlucker/why-does-the-pomodoro-technique-work-so-well</link>
      <guid>https://forem.com/andrewlucker/why-does-the-pomodoro-technique-work-so-well</guid>
      <description>

&lt;p&gt;It trains your brain to “flow” and focus&lt;/p&gt;

&lt;p&gt;During high school I liked to try different &lt;em&gt;brain hacks&lt;/em&gt;. My favorite was what I called &lt;em&gt;triggers&lt;/em&gt;. A brain trigger is simply a deferred memory: “when I get to homeroom, take out my physics book and open it to page 146”. Phrasing an action as the posterior of a condition is one of the most basic building blocks in the human brain. In computers this would be the &lt;em&gt;if statement&lt;/em&gt;. However the brain version is more powerful because this structure is time sensitive. It is due to this time sensitivity that the Pomodoro Technique works so well. Pomodoro takes one of our most primitive control structures and aligns it with a higher state of mind: &lt;strong&gt;task focus&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a previous post I talked about how scheduled events can &lt;a href="https://dev.to/andrewlucker/why-do-programmers-wear-headphones-for-the-same-reason-that-you-cantjuggle"&gt;cloud your working memory&lt;/a&gt; and make you lose focus. The Pomodoro Technique is interesting because it uses the exact same phenomenon to create the exact reverse effect: by deliberately scheduling a &lt;em&gt;focus event&lt;/em&gt;, you push other concerns out of your working memory, effectively clearing space for future use.&lt;/p&gt;

&lt;p&gt;The brain is really interesting in its ability to switch between very low-level semantics, like a logical if statement, and higher-level thoughts such as scheduling time to read and reflect on the philosophical works of Kant. Reading text is one of our highest faculties and something that completely separates humans from other living things with regard to intellectual capacity. The Pomodoro Technique is just one example of a simple technique used to high-level effect.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://medium.com/@andrew_subarctic/why-does-the-pomodoro-technique-work-so-well-66190ea8ddf"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;


</description>
      <category>ai</category>
      <category>productivity</category>
      <category>psychology</category>
      <category>mentalhealth</category>
    </item>
    <item>
      <title>Work Ethic and Culture</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Mon, 18 Sep 2017 22:12:36 +0000</pubDate>
      <link>https://forem.com/andrewlucker/work-ethic-and-culture</link>
      <guid>https://forem.com/andrewlucker/work-ethic-and-culture</guid>
      <description>

&lt;p&gt;Right now there is a discussion going on in the Machine Learning community about what constitutes an abusive work environment with regards to hours logged per week. The target of the discussion is &lt;a href="https://twitter.com/betaorbust/status/908890982136942592"&gt;Andrew Ng and a recent job post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Having worked in a variety of highly-productive work environments ranging all across the scale of abuse, I thought I would add my two cents. My opinion is simply that there are a variety of environments and you need to find those that fit you personally. That is all. It is hard to classify any singular pattern as abusive, because cultural (read ethnic) expectations are very different.&lt;/p&gt;

&lt;p&gt;I encountered my first difficult work environment while completing my undergraduate degree in Computer Science and working with a professor as a Research Assistant. When I started he told me straight away that “I don’t work hard until deadlines draw near, then I’ll expect you to keep up with me”. True to his word I worked part time for eight months learning C++ and reading networking journals. It wasn’t until about three weeks before the IEEE Infocom deadline that he started demanding more of my time. The last 4–5 days before the deadline were basically no sleep trying to finish the simulations and final draft of the paper.&lt;/p&gt;

&lt;p&gt;After finishing school I moved on to work at very lax startups that despite having no difficult time demands went on to be fairly successful. Performance is certainly not correlated to dedication.&lt;/p&gt;

&lt;p&gt;Starting with another younger company I found myself in a totally insane environment with founders who were so abusive that they would put Andrew Ng to shame. They burnt themselves out despite random trips to Vegas. I’m not sure what background created that mindset, but I left quickly.&lt;/p&gt;

&lt;p&gt;To me all I can see are the stereotypes. People from different families, countries, class status, and race, naturally have different expectations about what “hard work” means. The problem is that most communities and companies value hard work despite having conflicting definitions of that cultural trait. I come from a background of working class work-an-hour pay-an-hour ethics. Naturally for startups and research this doesn’t fit well and as such I’ve tried to adjust.&lt;/p&gt;

&lt;p&gt;Some people take positions of power and let it go to their head. I don’t think Andrew Ng is like that. If your supervisor is there beside you working the same difficulties, then they probably don’t mean to take advantage of you. Cultural expectations are wildly different around the globe, and tech spills it all together and we expect cookie-cutter management from the start. Anything to cut the stress is good in my opinion, but then again maybe my culture is wrong. I don’t know.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://medium.com/@andrew_subarctic/work-ethic-and-culture-5faf07fba5fe"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;


</description>
      <category>culture</category>
      <category>ai</category>
      <category>stress</category>
      <category>mentalhealth</category>
    </item>
    <item>
      <title>Why does English have such odd rules for verb conjugation</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Tue, 29 Aug 2017 19:08:47 +0000</pubDate>
      <link>https://forem.com/andrewlucker/why-does-english-have-such-odd-rules-for-verb-conjugation</link>
      <guid>https://forem.com/andrewlucker/why-does-english-have-such-odd-rules-for-verb-conjugation</guid>
      <description>&lt;p&gt;am, is, are, was, were, weren’t, has been, have been, is being…&lt;/p&gt;

&lt;p&gt;This quirk, of having lots of one-off verb conjugations, is not unique to English. However, it is also not universal: some languages have singular conjugations that apply to all verbs in the same way.&lt;/p&gt;

&lt;p&gt;To look at the problem let’s compare the two existing options:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;English: I was at the baseball game.&lt;br&gt;
Japanese: I (at the baseball game) (to be)-(past tense).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Aside from rearranging the sentence a little, the Japanese equivalent differs in that the root verb does not change with the tense. The tense is described in something called the “trailing characters”. Another example can make this clearer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;English: We were at the baseball game.&lt;br&gt;
Japanese: We (at the baseball game) (to be)-(past tense).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here we conjugate the English verb differently because the subject is plural. In the Japanese example there is no change, because verbs are plurality-indifferent. Not only is every verb conjugated in the same way, but there are also fewer conjugations.&lt;/p&gt;

&lt;p&gt;However, despite bringing up the odd case of Japanese, there are underlying forces on both languages that I would like to talk about in this post: rote memory and the related role of brevity in idioms.&lt;/p&gt;

&lt;p&gt;You may have heard that there is no &lt;em&gt;strict&lt;/em&gt; limit on long-term memory that has been observed in lab conditions. This may be due simply to the difficulty in testing for this quantity and also the large variance of each individual with regard to different subject matter.&lt;/p&gt;

&lt;p&gt;There is clearly a limit somewhere on rote memory, or we would not observe clinical deficits such as Alzheimer’s disease or amnesia. However, in healthy individuals, there is an exceptional capacity for learning that has never been fully observed at a limit. Meanwhile, some individuals have such clear and voluminous memory that they are labeled with “photographic memory”. This special designation describes their capacity to remember most of life’s experiences as if there were a camera running in the back of their mind.&lt;/p&gt;

&lt;p&gt;With this in mind, I would like to ask the question: how does our capacity for rote memorization and deficit in processing speed combine to affect the languages that we use?&lt;/p&gt;

&lt;p&gt;I’ve written before about our brain’s slow processing speed. Humans can do basic arithmetic tasks like addition at a max speed of maybe one operation per second. Computers, for comparison, do billions of computations per second. However, this is not a fair comparison. The human visual cortex can manipulate images or 3D scenes at rates comparable to modern computers. Just think of a three-dimensional object and zoom in, zoom out, or transform it in some way. We haven’t completely lost yet in terms of processing speed, but these computations are mostly sub-conscious and lack the general utility that is a hallmark of computer speed.&lt;/p&gt;

&lt;p&gt;So in summary: we can remember extremely large quantities of experiences and derived concepts, we are very slow at conscious thought, and we have extremely fast subconscious processes. These are the main restrictions that I would like to present as the pressures that shaped English et al.&lt;/p&gt;

&lt;p&gt;First, words must be short unless they are compound words. Our long-term memory works best when there is a symbolic representation for what we are trying to recall. These “labels” are our nouns, verbs, adjectives, prepositions, etc. This, strangely enough, is the main reason that we conjugate verbs: because it is easier to conjugate a word in strange ways than to recall a longer but more normalized word form. This applies to all languages that I am aware of; Japanese, for comparison, uses shorter words to make up for the extra length of the attached conjugation.&lt;/p&gt;

&lt;p&gt;Second, grammar must be simple. I was not taught any formal grammar until my junior year of high school. Most languages rely on grammar being entirely intuitive, yet our brains are limited by slow processing speeds for this sort of high-level thinking. So with dual forces working against it, our grammars remain simple to an extent.&lt;/p&gt;

&lt;p&gt;Third, semantics must be processed along with the grammar as words are formed. This is the strangest constraint, since we do not have a clear understanding of how natural language translates into meaning (bad Siri, bad Alexa). As our tools get better, we should get a clearer picture of how this shapes our languages. For now all we know is that it does affect language, but not in which specific ways.&lt;/p&gt;

&lt;p&gt;So to summarize, English conjugations are weird because human brains are barely capable of processing the languages that we create. It is actually the norm for languages to be reasonable while they are young, but when people start using them en masse they become cluttered and strange.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Av-P3Hg0p6zr7SrD7hH9sbg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Av-P3Hg0p6zr7SrD7hH9sbg.jpeg"&gt;&lt;/a&gt;nazis&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://medium.com/@andrew_subarctic/why-does-english-have-such-odd-rules-for-verb-conjugation-6d3a2648a4e6" rel="noopener noreferrer"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>naturallanguage</category>
      <category>language</category>
      <category>computerscience</category>
      <category>linguistics</category>
    </item>
    <item>
      <title>This is what CyberWar looks like</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Thu, 27 Jul 2017 04:18:48 +0000</pubDate>
      <link>https://forem.com/andrewlucker/this-is-what-cyberwar-looks-like</link>
      <guid>https://forem.com/andrewlucker/this-is-what-cyberwar-looks-like</guid>
      <description>&lt;p&gt;BTC-E charged with laundering funds stolen from Mt. Gox&lt;/p&gt;

&lt;p&gt;A grand jury in Northern California &lt;a href="https://www.justice.gov/usao-ndca/pr/russian-national-and-bitcoin-exchange-charged-21-count-indictment-operating-alleged"&gt;has indicted&lt;/a&gt; a Russian national, Alexander Vinnik, after tracing ~$4 billion worth of stolen Bitcoin to his addresses. His company BTC-e is also accused of operating an unlicensed money services business. He was arrested in Greece and will be extradited to face trial in the U.S.&lt;/p&gt;

&lt;p&gt;This is one hell of a jurisdiction nightmare if I ever saw one. If this trial follows the path it has set out so far, then the Libertarian take on the BTC currency will be all but dead. The remaining scaffolding will be public ledgers and secured transactions; privacy in industry will be gone.&lt;/p&gt;

&lt;p&gt;Assuming this $4 billion in BTC is put up for FBI auction, we could see huge new investment into the crypto-currency market, or a huge market crash. One way or another, this event will mark the beginning of the end for Wild West currency exchanges. The black markets are shuttering and being replaced with box stores. Apparently the &lt;a href="https://blogs.wsj.com/cfo/2017/07/12/daimler-uses-blockchain-to-issue-bonds/"&gt;Daimler ICO&lt;/a&gt; is a thing now.&lt;/p&gt;

&lt;p&gt;What happens when big companies play with crypto-fire? We have tested this crypto-weapon against the dirtiest of the dirty black markets. Cryptographically secure transactions, proof-of-work, and public ledgers are real and here to stay. They work best when all parties involved literally want to kill each other. What happens when this saw-toothed technology is unleashed into a Disneyland of Corporate BS?&lt;/p&gt;

&lt;p&gt;War.&lt;/p&gt;

&lt;p&gt;For the last hundred years, corporations and investment have been carefully guided and fenced by regulators with a toolbox full of simple gadgets to thwack and knock and hammer any business operation that looks more like a weed than a garden flower. Regulators just use the smell test: if it smells rotten, then it needs to change or leave.&lt;/p&gt;

&lt;p&gt;Now what happens when Distributed Autonomous Organizations start building social capital on the blockchain? FileCoin for IPFS. NameCoin for DNS. All internet infrastructure can be refitted to work with literally zero operational staff. These are current technologies, waiting in the wings. Imagine if Facebook belonged to its users. Imagine if Twitter didn’t need to be profitable, just popular. These what-ifs will be the next generation of web companies. Headless. Amorphous. Unstoppable.&lt;/p&gt;

&lt;p&gt;Investors seem pretty hyped. That’s what happens when there is blood in the water.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://medium.com/@andrew_subarctic/this-is-what-cyberwar-looks-like-52ad09cce95f"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>dao</category>
      <category>bitcoin</category>
      <category>ipfs</category>
    </item>
    <item>
      <title>Do neural cliques have “dimensions”?</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Fri, 21 Jul 2017 04:41:18 +0000</pubDate>
      <link>https://forem.com/andrewlucker/do-neural-cliques-have-dimensions</link>
      <guid>https://forem.com/andrewlucker/do-neural-cliques-have-dimensions</guid>
      <description>&lt;p&gt;The editorial staff at Frontiers may think so.&lt;/p&gt;

&lt;p&gt;The Blue Brain team, headed by Henry Markram, noted the appearance of groups of closely connected neurons in their digital models and simulations of rats and &lt;em&gt;C. elegans&lt;/em&gt; worms. Somewhere down the wire this story turned into headlines like “&lt;a href="https://www.sciencealert.com/new-study-discovers-your-brain-actually-works-in-up-to-11-dimensions" rel="noopener noreferrer"&gt;The Human Brain Can Create Structures in Up to 11 Dimensions&lt;/a&gt;”. If there is anything to gain from this title, then the Blue Brain team must have found a wormhole or something, right?&lt;/p&gt;

&lt;p&gt;No, this is why grammar matters. Take for example an original statement “A twelve neuron network can model objects in eleven dimensional space”. Then compare it to the new statement “A twelve neuron network occupies eleven dimensional space”. These titles now share nothing other than the subject. The meaning is completely different.&lt;/p&gt;

&lt;p&gt;As I have indicated &lt;a href="https://hackernoon.com/programming-language-oxidization-6bb76b0c9099" rel="noopener noreferrer"&gt;over&lt;/a&gt; and &lt;a href="https://hackernoon.com/translating-unix-philosophy-into-modern-environments-80d7949834f3" rel="noopener noreferrer"&gt;over&lt;/a&gt; again, language drift matters. So much so that sloppy journalism is causing a cultural rift in our society, between those that consume original documents and those who consume derivatives. Let’s call this &lt;em&gt;trickle down journalism&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Yes, I understand that the economics of journalism have changed. Yes, I know that academic publishing is exclusive. However, that is why I ideologically support samizdat projects like Sci-Hub. In a world where graduate-level science is shoveled down our throats to justify political agendas there is no room for excuses. Publicly funded research should be available to the public, for free. Instead we have a system where putz commoners pay once, twice, and three times more for writing they will never be permitted to see.&lt;/p&gt;

&lt;p&gt;Now back to the eleventh dimension. How could we have managed this story better? First, ignore reporters unless they link to the original (openly accessible) document at the top of the page. Second, science is not boring if you explain it well, so actively support and share good technical writing. Third, train your literary nose so you can ignore as much BS as possible.&lt;/p&gt;

&lt;p&gt;Scientific publishing is an art form. However, the rigorous style and prose that were expected of a peer-reviewed submission are deteriorating. Fewer people are now inclined to use the indirect “we” pronoun, in favor of crediting an influential coauthor. Titles are now written with click-baity consideration for skimmers on arXiv or similar repositories.&lt;/p&gt;

&lt;p&gt;Changing language to fit usage is natural, but some homage should be left to old standards. If you find yourself in the position of writing or reviewing peer-reviewed or otherwise technical writing, look for conversational tone, as this seems more and more appropriate to the current environment. This change of tone alone could help make science more approachable, assuming access is granted to the public.&lt;/p&gt;

&lt;p&gt;I believe that open-access will eventually become the standard. However, for now we still have to deal with pseudo-science in the public sphere. When science is locked away, we end up with scientific illiteracy as the consequence. Is it not more valuable to share than to hoard?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F960%2F1%2ALaOBD7KbYEO4t9muABMYYg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F960%2F1%2ALaOBD7KbYEO4t9muABMYYg.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://hackernoon.com/do-neural-cliques-have-dimensions-5e0f12b4b" rel="noopener noreferrer"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>science</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>neuralnetworks</category>
    </item>
    <item>
      <title>Open vs Closed User Interfaces</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Wed, 19 Jul 2017 02:57:10 +0000</pubDate>
      <link>https://forem.com/andrewlucker/open-vs-closed-user-interfaces</link>
      <guid>https://forem.com/andrewlucker/open-vs-closed-user-interfaces</guid>
      <description>&lt;h3&gt;
  
  
  or Why everything looks bad on your phone
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TSOhl4GA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/480/1%2AyhpjwF8tXMxLqgHaKIEbGg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TSOhl4GA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/480/1%2AyhpjwF8tXMxLqgHaKIEbGg.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;responsive&lt;/em&gt; website or app understands different screen sizes and resolutions, creating a different experience for different devices.&lt;/p&gt;

&lt;p&gt;There are many reasons why an app would not have responsive design, but the one that I would like to talk about today is the Open vs Closed UI problem.&lt;/p&gt;

&lt;p&gt;Open UIs, like web standards for example, allow dynamic and relative positioning of elements, ideally mixing native components and custom content into a clean interface. This clean mixing of dynamic and native components rarely happens. The fault usually lies in the nature of the layout engine (Open), and how new components must be continually built up from the toothpicks and glue that the platform provides. In web development these toothpicks started with &lt;em&gt;table layouts&lt;/em&gt; before moving into &lt;em&gt;div/css&lt;/em&gt; muck. There is no abstraction layer above the prefab components that are provided: currently 60-odd HTML tags and quite a lot of CSS fields.&lt;/p&gt;

&lt;p&gt;Closed UIs, like native mobile apps, discourage open development in favor of customizing the native library components. This usually results in a clean but rigid design look and feel. If you want anything extra, things become much more complicated very quickly.&lt;/p&gt;

&lt;p&gt;So how can someone transition through Open and Closed platforms to create a unified and responsive design?&lt;/p&gt;

&lt;p&gt;Projects like React or Elm have tried to answer this by creating languages to describe reusable semantic components. Their approaches are very different, but their goal is the same: to build great apps. This is easier said than done.&lt;/p&gt;

&lt;p&gt;Take for example the problem of sizing a movie animation in fullscreen mode across various devices. The easiest and most common way to approach this problem is to size the movie to the most constrained dimension: height or width. Laptops tend to be wider than tall, so this works well on the developer laptop at least! However, take this approach to a mobile phone and suddenly the problem appears. Phones have a concept of “orientation”. By tilting a phone horizontally or vertically, a user expects the device to reorient the content to fit the new screen dimensions. For our movie example this means that a user would probably want to hold the phone horizontally and thus use the entire screen to play the content. Vertically oriented movies would be too small to see; however, standard web video streaming usually plays in fixed vertical mode. Neglecting to use screen orientation results in a very poor user experience.&lt;/p&gt;
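&lt;p&gt;A minimal sketch of that constrained-dimension fit, in Python (the function name and round-to-pixel behavior here are my own illustrative choices, not any standard API):&lt;/p&gt;

```python
# A sketch of the "size to the most constrained dimension" approach
# described above; names and rounding are illustrative assumptions.
def fit_to_screen(video_w, video_h, screen_w, screen_h):
    """Scale a video to fill the screen along its most constrained
    dimension while preserving aspect ratio (letterbox/pillarbox)."""
    scale = min(screen_w / video_w, screen_h / video_h)
    return round(video_w * scale), round(video_h * scale)

# A 16:9 movie on a wide laptop display: width is the constraint.
print(fit_to_screen(1920, 1080, 1280, 800))   # (1280, 720)

# The same movie on a phone held vertically: it shrinks badly...
print(fit_to_screen(1920, 1080, 1080, 1920))  # (1080, 608)

# ...but reorienting to landscape lets it fill the whole screen.
print(fit_to_screen(1920, 1080, 1920, 1080))  # (1920, 1080)
```

&lt;p&gt;The same movie fills a landscape screen but shrinks on a portrait one, which is exactly why ignoring orientation produces such a poor experience.&lt;/p&gt;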

&lt;p&gt;This is the core of responsive design: creating content that is specifically tailored to its current, dynamically changing environment.&lt;/p&gt;

&lt;p&gt;To achieve this we need abstraction that has not yet been standardized or widely disseminated. This means that native code should learn a little from the web, and the web should learn a little from native. We need more component libraries that bring well-tailored experiences, cross-breeding those components with CSS-like styling. This is the goal of the &lt;a href="https://github.com/andrew-lucker/Lattice"&gt;Lattice experimental UI framework&lt;/a&gt;. By taking inspiration from web standards and snares, then moving those concepts into a rigorous cross-platform development environment, the hope is that something of a higher order will materialize.&lt;/p&gt;

&lt;p&gt;The project will be complete when a standard and responsive experience can be achieved across mobile, web, and desktop environments. Many have tried to achieve these goals throughout the Object-Oriented renaissance. However, with Rust’s safer and slightly functional approach, we hope to overturn past failed projects and create something that we can all learn from: a new toolbox for UI and UX progress.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://hackernoon.com/open-vs-closed-user-interfaces-46daf6e9a1b4"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>ux</category>
      <category>design</category>
    </item>
    <item>
      <title>Language drift is real</title>
      <dc:creator>Andrew Lucker</dc:creator>
      <pubDate>Mon, 17 Jul 2017 02:30:47 +0000</pubDate>
      <link>https://forem.com/andrewlucker/language-drift-is-real</link>
      <guid>https://forem.com/andrewlucker/language-drift-is-real</guid>
      <description>&lt;p&gt;Language drift is real&lt;/p&gt;

&lt;p&gt;and it is starting to accelerate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AzhZVosQmUM5fk9GuNSfsyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AzhZVosQmUM5fk9GuNSfsyg.png"&gt;&lt;/a&gt;Language Map&lt;/p&gt;

&lt;p&gt;English is a child of many parents. Our language used to be so fractured geographically that a commoner would not understand another from even several miles away. This was the world of Beowulf, written sometime around 1000 AD.&lt;/p&gt;

&lt;p&gt;The language at that time was commonly Low Germanic and Anglo-Saxon. However, over the span of several hundred years, several forces would combine to unite these disparate dialects, culminating in the works of Shakespeare: a body of literature that we can still appreciate today without much translation or side notes.&lt;/p&gt;

&lt;p&gt;The two new forces at work would be the Church and the State. The Church brought Latin to the people in the form of sermons. The State brought French to the people in the form of negotiation. Both would utterly change the structure of Old English into what we know today.&lt;/p&gt;

&lt;p&gt;Similar forces are combining today to create new language communities from within. The forces at work today are not so different from the past. The Church sermons have been replaced with Computer Science lectures. The State negotiations have been replaced with Social Networks and Voice Assistants.&lt;/p&gt;

&lt;p&gt;The pace of language transformation in computing is unrelenting. As anyone in the industry will tell you, technologies age quickly. Currently we are piling up aging infrastructure at such an astounding rate that we may soon see the advent of a new kind of Computer Field: Technological Archaeology. Everything faces in the direction of “new”, and idioms, syntax, and semantics become dated over the span of four years.&lt;/p&gt;

&lt;p&gt;Human languages, discounting emojis and memes, have thus far withstood the onslaught of Internet forces. However, I believe this will not last much longer. If Twitter or Facebook have shown us anything, it is that humans are really bad at communication. Anything spoken from the emotional or ideological layer of speech will be lost or ignored. Emotions are too personal and ideology is boring. At this rate our network bubbles will turn into network boundaries. Network boundaries are exactly the lines along which language boundaries form. Unless we start breaking bubbles, drift will continue to accelerate.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://medium.com/@andrew_subarctic/language-drift-is-real-11eba1d74b91" rel="noopener noreferrer"&gt;medium.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>linguistics</category>
      <category>socialnetwork</category>
      <category>language</category>
      <category>mentalhealth</category>
    </item>
  </channel>
</rss>
