<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: gaborschulz</title>
    <description>The latest articles on Forem by gaborschulz (@gaborschulz).</description>
    <link>https://forem.com/gaborschulz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F567276%2F75255e3c-6dcc-47ae-91cb-e58a44849232.jpeg</url>
      <title>Forem: gaborschulz</title>
      <link>https://forem.com/gaborschulz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gaborschulz"/>
    <language>en</language>
    <item>
      <title>Time Series Analysis with Polars</title>
      <dc:creator>gaborschulz</dc:creator>
      <pubDate>Sun, 10 Dec 2023 14:13:04 +0000</pubDate>
      <link>https://forem.com/gaborschulz/time-series-analysis-with-polars-3dfg</link>
      <guid>https://forem.com/gaborschulz/time-series-analysis-with-polars-3dfg</guid>
      <description>&lt;p&gt;&lt;a href="https://pola.rs"&gt;Polars&lt;/a&gt; seems to be one of the most exciting developments recently in the field of data analysis. It promises to be a fast and easy to use tool that can help overcome some of the trickiest challenges analysts face when using Pandas. E.g. it can process datasets that are larger than the computer's memory by using lazy processing, and offers a friendlier learnings curve than PySpark.&lt;/p&gt;

&lt;p&gt;Pandas offers many great features to simplify the analysis of time series. In this post I'm going to try to prepare and analyze a temporal dataset to see how far I can go with Polars.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use Polars?
&lt;/h2&gt;

&lt;p&gt;Pandas has played a major role in making the data revolution possible and bringing data scientists to Python. It's still a great, powerful library, and after almost 16 years of development it remains a solid choice if you need a robust dataframe library, but it also has some shortcomings. &lt;/p&gt;

&lt;p&gt;One is the heritage of being built around the NumPy library, which is great for processing numerical data but becomes an issue as soon as the data is anything else. Pandas 2.0 has started to bring in Arrow, but it's not yet the standard (you have to opt in, and according to the developers it's going to stay that way for the foreseeable future). Also, pandas's Arrow-based features are not yet entirely on par with its NumPy-based ones. Polars was built around &lt;a href="https://arrow.apache.org"&gt;Arrow&lt;/a&gt; from the get-go. This makes it very powerful when it comes to exchanging data with other languages, and it reduces the number of in-memory copying operations, leading to better performance.&lt;/p&gt;

&lt;p&gt;The second point is that pandas cannot use all of your computer's computing power without external help. No matter how many CPU cores you've got, a Pandas computation will only ever use one of them. Tools like Dask can help you get around that limitation, but they add another dependency and another tool you've got to learn. Polars can use all the computing power available to you without any extra work.&lt;/p&gt;

&lt;p&gt;The third point I love is the lazy API. It lets you build your data pipeline first and work on datasets that are larger than your computer's memory. This is very powerful if you have to analyze huge datasets or if you are working with streaming data. It also comes with query optimization, which can be a huge performance win in its own right.&lt;/p&gt;

&lt;p&gt;The API is very friendly, too. If you're coming from pandas, you might have to unlearn a couple of habits and learn a few new concepts, but overall you'll find that the learning curve stays moderate. &lt;/p&gt;

&lt;p&gt;If you're curious to read more about the performance benefits, check out this &lt;a href="https://pola.rs/posts/benchmarks/"&gt;benchmark&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The dataset
&lt;/h2&gt;

&lt;p&gt;For this tutorial we're going to use the &lt;a href="https://ec.europa.eu/eurostat/databrowser/view/nrg_105m/default/table?lang=en"&gt;Supply of electricity&lt;/a&gt; dataset provided by Eurostat. This dataset contains the electricity supply of European countries per month between January 2008 and December 2019. You can download the dataset as a TSV file.&lt;/p&gt;

&lt;p&gt;The dataset offers a few interesting challenges. First, it's pivoted, i.e. the months are on the columns and the non-temporal identifiers are on the rows. Second, these identifiers are contained in a single column, separated by commas. Third, the dataset is a TSV file, i.e. we'll have to use tab instead of comma as the separator when reading it.&lt;/p&gt;

&lt;p&gt;We'll have to deal with missing values, data type issues, etc. as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are we going to do?
&lt;/h2&gt;

&lt;p&gt;We'll start by reading the dataset and checking if it's good for our purpose.&lt;/p&gt;

&lt;p&gt;As usual, it's not, so we'll do some data wrangling to bring it into a shape that makes it easier for us to find answers to our questions.&lt;/p&gt;

&lt;p&gt;Finally, we're going to ask a couple of questions and try to answer them using Polars.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are we not going to do (yet)?
&lt;/h2&gt;

&lt;p&gt;We're not going to go into visualization yet, but I'm planning to add some charts to the notebook as well at a later stage.&lt;/p&gt;

&lt;p&gt;We're not going to take advantage of the lazy API as the dataset is quite small. I'm planning to do a separate post about this topic, so stay tuned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's see the code
&lt;/h2&gt;

&lt;p&gt;Feel free to download the dataset and the Jupyter notebook and run it in your own environment if you'd like to follow along. The notebook is available here:&lt;br&gt;
&lt;a href="https://github.com/gaborschulz/learning-polars/blob/main/01-time-series-analysis/time-series-analysis.ipynb"&gt;https://github.com/gaborschulz/learning-polars/blob/main/01-time-series-analysis/time-series-analysis.ipynb&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll find a detailed description of each step in the notebook. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;If you've followed the notebook, you've seen that the Polars API is quite easy to use and offers you very powerful capabilities when it comes to working with time series. Feel free to continue experimenting with the dataset and the code, and let me know if you've got questions or comments. &lt;/p&gt;

&lt;p&gt;I'm also happy to see your feedback on the topics you'd like me to write about around Polars.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;Data source:&lt;br&gt;
&lt;a href="https://ec.europa.eu/eurostat/databrowser/view/nrg_105m/default/table?lang=en"&gt;Supply of electricity - monthly data. EuroStat Data Browser&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Polars user guide:&lt;br&gt;
&lt;a href="https://pola-rs.github.io/polars/user-guide/"&gt;User Guide&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>polars</category>
      <category>timeseries</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Running Python 3.12 on AWS Lambda</title>
      <dc:creator>gaborschulz</dc:creator>
      <pubDate>Sun, 11 Dec 2022 16:54:06 +0000</pubDate>
      <link>https://forem.com/gaborschulz/running-python-311-on-aws-lambda-1i7p</link>
      <guid>https://forem.com/gaborschulz/running-python-311-on-aws-lambda-1i7p</guid>
      <description>&lt;p&gt;AWS Lambda functions have completely revolutionized the way I work with (and think about) compute. It's just amazingly convenient to implement a quick function, push it to Lambda, schedule it with EventBridge and let it run for free or for almost free.&lt;/p&gt;

&lt;p&gt;However, if you're like me, you want to keep pace with the latest and greatest in the Python world, which, at the time of this writing, is Python 3.12. I find the new error messages unbelievably useful and I could hardly wait for the ability to use the same quotation mark inside the curly braces in f-strings.&lt;/p&gt;

&lt;p&gt;And this passion to keep up with the latest and greatest interferes with using AWS Lambda, which only offers Python 3.7, 3.8, 3.9, 3.10 and 3.11.&lt;/p&gt;

&lt;p&gt;Luckily, there's a way around this, and it's fairly easy to implement: create your own Lambda runtime.&lt;/p&gt;

&lt;p&gt;There's ample documentation and a wide array of tutorials that show you how to do this (e.g. &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/runtimes-walkthrough.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/runtimes-walkthrough.html&lt;/a&gt;). However, I also like to follow the D.R.Y. (Don't Repeat Yourself) principle, so why not package it as a reusable Docker image and use it in all my Lambda apps?&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1. Setting things up
&lt;/h2&gt;

&lt;p&gt;Let's start by creating a new directory for our entire build setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p python-lambda-runtimes 
cd python-lambda-runtimes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create a &lt;code&gt;Dockerfile&lt;/code&gt; in the directory with the following content (we're going to look at what each line does after the code).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Define custom function directory
ARG FUNCTION_DIR="/var/task/"

FROM python:3.12-slim-bookworm as build-image

# Include global arg in this stage of the build
ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR}

# Install aws-lambda-cpp build dependencies
RUN apt-get update &amp;amp;&amp;amp; \
  apt-get install -y \
  g++ \
  make \
  cmake \
  unzip \
  libcurl4-openssl-dev

# Install the function's dependencies
RUN pip install --target ${FUNCTION_DIR} awslambdaric


FROM python:3.12-slim-bookworm

# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}

# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's see what's happening here. We use a multi-stage build to reduce the size of our final image. On the &lt;code&gt;FROM&lt;/code&gt; lines we choose which Python version we want to use for our runtime. If you wanted to work with Python 3.11, you could simply replace the &lt;code&gt;3.12&lt;/code&gt; part with &lt;code&gt;3.11&lt;/code&gt;. &lt;br&gt;
As the last step of the first build stage we install &lt;code&gt;awslambdaric&lt;/code&gt;, the &lt;a href="https://github.com/aws/aws-lambda-python-runtime-interface-client" rel="noopener noreferrer"&gt;AWS Lambda Runtime Interface Client&lt;/a&gt;, which makes sure that the Lambda environment is able to communicate with our own code.&lt;br&gt;
In the second stage we simply copy the runtime interface client into our final image.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Create a repository in your favorite Docker Container Registry
&lt;/h2&gt;

&lt;p&gt;Since Docker changed its pricing model some time ago, I've been using AWS Elastic Container Registry (ECR) for my custom, private images. So, let's create a new repo. You can either use the AWS Console if you prefer the GUI, or, if you like the CLI, simply run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository --repository-name python-lambda-runtime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a repository called &lt;code&gt;python-lambda-runtime&lt;/code&gt;. Feel free to replace the name with anything you prefer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Log in to your preferred repo
&lt;/h2&gt;

&lt;p&gt;For example, if you are using ECR you can use this to log in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please replace &lt;code&gt;&amp;lt;AWS_ACCOUNT_ID&amp;gt;&lt;/code&gt; with the ID of your AWS account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Build and push your image
&lt;/h2&gt;

&lt;p&gt;To build your image and push it to ECR, you can do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t python-lambda-runtime . &amp;amp;&amp;amp; \
docker tag python-lambda-runtime:latest &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/python-lambda-runtime:latest &amp;amp;&amp;amp; \
docker push &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/python-lambda-runtime:3.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, you have everything in place to start using your pre-built image for your Lambda functions. In the next section, we'll go through all the steps you need to take to use it in your SAM template.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Use the image in your SAM template
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create a &lt;code&gt;Dockerfile&lt;/code&gt; in the directory of your Lambda function. Add the following content:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/python-lambda-runtime:3.12
COPY . /var/task/
RUN chmod -R 0755 .
RUN pip install -r requirements.txt
CMD ["app.lambda_handler"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Look for the &lt;code&gt;Type: AWS::Serverless::Function&lt;/code&gt; part of your &lt;code&gt;template.yaml&lt;/code&gt; file. Remove the &lt;code&gt;CodeURI&lt;/code&gt;, &lt;code&gt;Handler&lt;/code&gt;, and the &lt;code&gt;Runtime&lt;/code&gt; lines, and add &lt;code&gt;PackageType: Image&lt;/code&gt; instead. &lt;/li&gt;
&lt;li&gt;Add a new section with the same indentation as the &lt;code&gt;Type: AWS::Serverless::Function&lt;/code&gt; with the following content:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Metadata:
    DockerTag: &amp;lt;NAME OF YOUR FUNCTION&amp;gt;
    DockerContext: ./&amp;lt;DIRECTORY OF YOUR FUNCTION&amp;gt;
    Dockerfile: Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, if your function was called &lt;code&gt;TestFunction&lt;/code&gt; and lived in the directory &lt;code&gt;test_function&lt;/code&gt; of your project root, your Metadata would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Metadata:
    DockerTag: TestFunction
    DockerContext: ./test_function
    Dockerfile: Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run &lt;code&gt;sam build&lt;/code&gt;, it will create the new container image for you.&lt;br&gt;&lt;br&gt;
If you've already deployed your Lambda function before, you'll have to delete it or deploy the containerized one under a new name. Also, because your &lt;code&gt;samconfig.toml&lt;/code&gt; file already contains settings for your previous deployment, it makes sense to rename it to something else, and run &lt;code&gt;sam deploy --guided&lt;/code&gt; to re-populate it with your new settings.&lt;/p&gt;
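&lt;p&gt;For completeness, the &lt;code&gt;app.lambda_handler&lt;/code&gt; that the &lt;code&gt;CMD&lt;/code&gt; line points at can be as minimal as this (a hypothetical hello-world handler, just to show the expected shape):&lt;/p&gt;

```python
# app.py -- a hypothetical minimal handler matching the
# "app.lambda_handler" CMD in the Dockerfile above.
import json


def lambda_handler(event, context):
    # Lambda passes in the triggering event and a runtime context object.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Python 3.12 on Lambda!"}),
    }
```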

</description>
      <category>python</category>
      <category>performance</category>
    </item>
    <item>
      <title>Building An Authorization Engine In Python...</title>
      <dc:creator>gaborschulz</dc:creator>
      <pubDate>Mon, 05 Apr 2021 08:33:18 +0000</pubDate>
      <link>https://forem.com/gaborschulz/to-allow-or-not-to-allow-that-is-the-question-1j65</link>
      <guid>https://forem.com/gaborschulz/to-allow-or-not-to-allow-that-is-the-question-1j65</guid>
      <description>&lt;p&gt;Some time ago I developed a competitive intelligence platform. Not much later the simple authorization model used in the tool turned out to be way too simple for all the demands users came up with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where we came from
&lt;/h2&gt;

&lt;p&gt;Before the project started, we had a solution in place that was based solely on sending Excel files back and forth. Permission management was easy: you got the file, so you were authorized to view and edit it. These files were split by country, so it was obvious that permissions in the tool would also be based on countries. If you were assigned to a country, you could view and edit all entries in that country; otherwise you couldn't see any entries at all. Later, management decided that we could gain more benefit from opening up the tool a bit: everybody should be able to see everything, but only edit items in the countries they were assigned to. But since there is also an option to download data in bulk, there was some risk. What if someone leaving the company on bad terms went and downloaded the entire database? So we limited bulk downloads to users' own countries. &lt;/p&gt;

&lt;p&gt;Of course, there are some key users who are authorized to edit master data in the tool, and they are the only people who can delete customer entries from the database. Key users kept approaching me for a way to give other users delete permissions, but with limitations. And, as always, there are some edge cases that need more limitations than the tool could currently offer, or more rights and fewer limitations. &lt;/p&gt;

&lt;p&gt;This whole authorization and permission thing started to get veeery complex, to say the least.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where we wanted to be
&lt;/h2&gt;

&lt;p&gt;I've always been a big fan of how AWS handles authorization to access resources. IAM policies are great: they give you a way to grant any permission on any level to any user and make sure that users can do everything they need to, nothing more, nothing less.&lt;/p&gt;

&lt;p&gt;Before, I'd already created a prototype that helped me understand how such a model works. Not that sophisticated, of course, but an authorization system that had at least the following properties:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it should be based on policies written in JSON
&lt;/li&gt;
&lt;li&gt;it should be able to deny or allow any action on any resource in the platform
&lt;/li&gt;
&lt;li&gt;you should be able to grant access based on groups and individual policies and these should be effective together
&lt;/li&gt;
&lt;li&gt;it should not put too big a performance burden on the end user, i.e. authorization checks should add no more than 10% to the time they would otherwise spend waiting&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How we got there
&lt;/h2&gt;

&lt;p&gt;Let me give you a quick rundown of the process I used to develop the solution that ended up in the production application. It is based on the authorization experiment I mentioned above.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: The concept of my experiment
&lt;/h3&gt;

&lt;p&gt;The concept is surprisingly simple.&lt;br&gt;&lt;br&gt;
Each user and group (country assignment) has a policy document, which is a JSON string. This policy document consists of a policy name, an action (what the user wants to do), a resource (what the user wants to do it with) and an effect (allow or deny). Each of these policy elements uses regex notation to allow fine-tuning. At runtime, these permission regexes are compiled and cached so they can be reused (this helps achieve the performance goal).&lt;br&gt;&lt;br&gt;
Each resource publishes its own resource document as a JSON-like object as well. JSON-like, because all quotation marks and whitespace after colons are stripped away, so it's not valid JSON, but it's very similar.&lt;/p&gt;
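&lt;p&gt;As a sketch, producing such a document from a resource could look like this in Python (the helper name and the field names are made up for illustration):&lt;/p&gt;

```python
import json

# Hypothetical helper: serialize a resource to the "JSON-like" form
# described above -- quotes and whitespace after colons stripped -- so
# policy regexes have a compact, stable string to match against.
def resource_document(resource: dict) -> str:
    return json.dumps(resource, separators=(",", ":")).replace('"', "")

doc = resource_document({"entity": "customer", "country": "DE"})
# doc == "{entity:customer,country:DE}"
```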

&lt;p&gt;Authorization of a user on an individual item goes like this:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check if there is an &lt;strong&gt;explicit DENY&lt;/strong&gt; policy for the resource and action by matching all Deny policy regexes of the user. If there is, the request is &lt;strong&gt;denied&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Check if there is an &lt;strong&gt;explicit ALLOW&lt;/strong&gt; policy for the resource and action by matching all Allow policy regexes of the user. If there is, the request is &lt;strong&gt;granted&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;If there is &lt;strong&gt;no explicit ALLOW&lt;/strong&gt; for the resource, the request is implicitly &lt;strong&gt;denied&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;
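&lt;p&gt;The three steps above can be sketched in a few lines of Python (the policy shape and field names here are illustrative, not the exact production code):&lt;/p&gt;

```python
import re
from functools import lru_cache

# Illustrative policies: an effect plus regex patterns for the action
# and the resource document.
policies = [
    {"name": "deny-bulk-export", "effect": "Deny",
     "action": r"export:.*", "resource": r".*country:DE.*"},
    {"name": "edit-own-country", "effect": "Allow",
     "action": r"(view|edit):.*", "resource": r".*country:DE.*"},
]

@lru_cache(maxsize=None)
def compiled(pattern: str) -> re.Pattern:
    # Cache compiled regexes so repeated checks stay fast.
    return re.compile(pattern)

def is_authorized(policies, action, resource_doc):
    def matches(p):
        return (compiled(p["action"]).fullmatch(action) is not None
                and compiled(p["resource"]).fullmatch(resource_doc) is not None)
    # 1. An explicit DENY always wins.
    if any(p["effect"] == "Deny" and matches(p) for p in policies):
        return False
    # 2. Otherwise an explicit ALLOW grants the request.
    if any(p["effect"] == "Allow" and matches(p) for p in policies):
        return True
    # 3. No matching ALLOW: implicit deny.
    return False
```

&lt;p&gt;With these policies, &lt;code&gt;is_authorized(policies, "edit:customer", "{entity:customer,country:DE}")&lt;/code&gt; is granted, while any &lt;code&gt;export:&lt;/code&gt; action on the same resource is denied.&lt;/p&gt;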

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rBXu0hyE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.gaborschulz.com/images/advanced-authorization.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rBXu0hyE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.gaborschulz.com/images/advanced-authorization.png" alt="Advanced Authorization" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: The prototype
&lt;/h3&gt;

&lt;p&gt;I had created a prototype in Python for my experiment (&lt;a href="https://github.com/gaborschulz/authorization-prototype"&gt;https://github.com/gaborschulz/authorization-prototype&lt;/a&gt;) to understand how policy-based authorization could work in a language-agnostic framework. The resource and policy document processing worked nice and smooth, so my idea was to recreate the whole thing in C# for this project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Putting it into action
&lt;/h3&gt;

&lt;p&gt;Since the platform is developed in C#, I had to convert the prototype into C#. I built the authorization resource object into each model and added a central method for checking authorization on individual objects. I also wanted to get lists filtered by authorizations. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Learnings
&lt;/h3&gt;

&lt;p&gt;The solution has proven to be amazingly effective, as we are now able to write policies with almost any desired effect. We can hide items generally or from certain users only, hide only their details, etc. The full power of regular expressions makes this tool very flexible and still fairly easy to use.&lt;br&gt;&lt;br&gt;
It uses only standard library features that are available in almost any programming language.&lt;br&gt;&lt;br&gt;
It's very important to cache the compiled regex statements properly to maintain performance. It's enough to purge a user's authorizations from the cache whenever their policy changes or a predefined amount of time has passed.&lt;/p&gt;

</description>
      <category>python</category>
      <category>authorization</category>
      <category>json</category>
    </item>
  </channel>
</rss>
