Contents

Advanced Python You’ll Actually Use

List comprehensions, generators, and a little bisect magic

Python is often described as “easy to learn, hard to master”—but the truth is, it’s easy to write and even easier to improve once you learn a few advanced idioms. These small tools make your code more expressive, efficient, and closer to how Python was meant to be used.

And under the hood? Much of this runs at C speed, thanks to Python’s standard library doing the heavy lifting for you.

List Comprehensions

Python’s list comprehensions are the flex you didn’t know you needed. They let you build lists with clarity, elegance, and just a hint of style. And they’re often faster than an equivalent for loop, because the iteration runs in C inside the CPython interpreter.

This simple list comprehension creates a new list of squares from 0 to 9:

[x * x for x in range(10)]
# [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Now let’s add some logic. Let’s say you only want even numbers. Just tack on a condition at the end to filter with panache.

[x * x for x in range(10) if not x % 2]
# [0, 4, 16, 36, 64]

But list comprehensions aren’t just about filtering. You can also transform your data on the fly.

["odd" if x%2 else "even" for x in range(10)]
# ['even', 'odd', 'even', 'odd', 'even', 'odd', 'even', 'odd', 'even', 'odd']

Minimal typing. Maximum vibes.

Generators

Let’s say you’re working with millions of datapoints, and you don’t want to keep the entire list in memory. A generator expression lets you iterate through them lazily.

Here’s a simple one that yields the squares of even numbers, without building the entire list in memory:

(x * x for x in range(1000000) if not x % 2)

It looks like a list comprehension, but with parentheses instead of brackets. Subtle. Efficient. Sophisticated.
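To see the laziness in action, here’s a small sketch: nothing is computed until you ask for a value, and each request produces exactly one result.

```python
# A generator expression computes values lazily, one at a time.
gen = (x * x for x in range(10) if not x % 2)

print(next(gen))  # 0  (only the first value has been computed)
print(next(gen))  # 4
print(sum(gen))   # 116  (sum() drains the rest: 16 + 36 + 64)
```

Once drained, a generator is exhausted; build a new one if you need to iterate again.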

You can even define your own generators.

Say you’re pulling data from a database—thousands, or even millions, of records. You don’t want to load it all into memory, so you stream it in chunks using a custom generator.

def chunk_reader(session, chunk_size):
    offset = 0
    while True:
        rows = session.execute(
            f"SELECT * FROM USER {chunk_size} OFFSET {offset}"
        ).fetchall()

        if not rows:
            break

        yield rows
        offset += chunk_size


for chunk in chunk_reader(session, 100):
    print(chunk)

Each iteration of the for loop grabs 100 users from the database, without ever holding the entire dataset in memory.

Generators are memory-efficient. With yield, each chunk is returned on demand. Ideal for streaming data or working with large files.
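The same pattern works for large files. Here’s a minimal sketch of a batching line reader (the helper name and batch size are illustrative, not from any particular library):

```python
def line_reader(path, batch_size):
    """Yield lines from a file in batches, never holding the whole file in memory."""
    batch = []
    with open(path) as f:
        for line in f:  # file objects iterate lazily, one line at a time
            batch.append(line.rstrip("\n"))
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:  # don't lose a final, partial batch
        yield batch
```

Each iteration of `for batch in line_reader("big.log", 1000)` would hand you at most 1,000 lines at a time.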

Dictionary Comprehensions

Same idea as list comprehensions, but for dictionaries:

{chr(i): i for i in range(65, 91)}

This example builds a simple ASCII map. It’s concise, readable, and fast—because again, this is all backed by C implementations in CPython.
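To make that concrete, here’s the mapping in action, plus the reverse mapping built with the same pattern:

```python
ascii_map = {chr(i): i for i in range(65, 91)}  # 'A'..'Z' -> 65..90
print(ascii_map["A"], ascii_map["Z"])  # 65 90

# Inverting a dictionary is the same one-liner in reverse:
code_map = {v: k for k, v in ascii_map.items()}
print(code_map[90])  # Z
```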

enumerate()

If you’re still doing this:

for i in range(len(data)): 
    print(i, data[i])

That’s fine. But you can also do this:

for i, item in enumerate(data): 
    print(i, item)

It’s clearer, avoids off-by-one errors, and just feels more natural in Python.
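enumerate() also accepts an optional start argument, handy for human-friendly, 1-based numbering (the sample list here is made up for illustration):

```python
data = ["alpha", "beta", "gamma"]  # sample data

for i, item in enumerate(data, start=1):
    print(f"{i}. {item}")
# 1. alpha
# 2. beta
# 3. gamma
```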

Memoization with @lru_cache

If you’re calling a slow function repeatedly with the same arguments, memoization can dramatically improve performance. Python’s lru_cache makes it effortless.

Take the Fibonacci sequence, for instance:

# Slow
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(30)

To calculate fib(30), you must calculate fib(29) and fib(28). But calculating fib(29) also requires fib(28), so fib(28) gets computed twice, and that duplication compounds all the way down the call tree.

I’m sure you can see how this can be a problem.

Without memoization, this function has exponential time complexity (O(2ⁿ)), because it recalculates the same values repeatedly.

With lru_cache, the complexity drops to linear O(n), since each value is computed once and then reused.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(30)

Behind the scenes, lru_cache stores previous results in memory—so repeated calls are instant. Great for recursive functions, database lookups, or anything deterministic and expensive.

Note: This only works when the function’s arguments are hashable (e.g., ints, strings, tuples—not lists or dicts).
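To verify the cache is doing its job, lru_cache exposes a cache_info() method. A quick sketch using the same memoized fib as above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(30)                  # 832040
print(fib.cache_info())  # 31 misses: fib(0)..fib(30) each computed exactly once
```

Without the cache, fib(30) makes well over a million recursive calls; with it, just 31 distinct computations.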

Merging With zip()

When you need to loop over two (or more) lists in parallel, you can use zip() to combine them:

names = ["Gustav", "Mael", "Lune"]
scores = [92, 99, 78]

for name, score in zip(names, scores):
    print(f"{name} scored {score}")
# ("Gustav",92)
# ("Mael",99)
# ("Lune",78)

Under the hood, zip() pairs up the elements of the two lists into a series of tuples.

This avoids the range(len(...)) pattern and makes your code more Pythonic.

Bonus: zip() stops at the shortest list, which can help prevent index errors.
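Here’s that shortest-list behavior in action, along with itertools.zip_longest for when you’d rather pad than truncate (the missing score is contrived for illustration):

```python
from itertools import zip_longest

names = ["Gustav", "Mael", "Lune"]
scores = [92, 99]  # one score missing

# zip() stops at the shortest input:
print(list(zip(names, scores)))
# [('Gustav', 92), ('Mael', 99)]

# zip_longest() pads the shorter input instead:
print(list(zip_longest(names, scores, fillvalue=0)))
# [('Gustav', 92), ('Mael', 99), ('Lune', 0)]
```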

Binary Search with bisect_left

Maintaining a sorted list and want to insert new values while keeping it sorted?

from bisect import bisect_left

nums = [10, 20, 30, 40]
pos = bisect_left(nums, 25)  # returns 2
nums.insert(pos, 25)
# Result: [10, 20, 25, 30, 40]

bisect_left performs a binary search in O(log n) time. That performance boost? Courtesy of the C implementation under the hood.
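The search-then-insert pair is so common that the bisect module bundles it as insort. One caveat worth knowing: the binary search is O(log n), but list.insert itself still shifts elements in O(n).

```python
from bisect import insort

nums = [10, 20, 30, 40]
insort(nums, 25)  # binary search for the spot, then insert
print(nums)       # [10, 20, 25, 30, 40]
```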

any() and all()

Great for validation logic:

any(x > 5 for x in my_list)

→ True if at least one value is greater than 5

all(x > 0 for x in my_list)

→ True if every value is greater than 0

Short, readable, and efficient. Use with generator expressions for best performance.
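A concrete sketch with made-up scores:

```python
scores = [88, 92, 79, 95]  # sample data

print(any(s > 90 for s in scores))   # True: 92 qualifies
print(all(s >= 70 for s in scores))  # True: every score passes

# Both short-circuit: any() stops at the first True, all() at the first False,
# so pairing them with generator expressions avoids unnecessary work.
```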

Final Thoughts

These patterns won’t make your code clever—they’ll make it clean. Python was designed to read like pseudocode, but that doesn’t mean it can’t run fast. Behind many of these tools is a tight C implementation doing the hard work for you.

Small changes. Big gains.

Happy coding.