this post was submitted on 14 Mar 2024
54 points (100.0% liked)

Programming


Hey there!

I'm a chemical physicist who has been using python (as well as matlab and R) for a lot of different tasks over the last ~10 years, mostly for data analysis but also to automate certain tasks. I am almost completely self-taught, and though I have gotten help and tips from professors throughout the completion of my degrees, I have never really been educated in best practices when it comes to coding.

I have some friends who work as developers but have a similar academic background as I do, and through them I have become painfully aware of how bad my code is. When I write code, it simply needs to do the thing, conventions be damned. I do try to read up on the "right" way to do things, but the holes in my knowledge become pretty apparent pretty quickly.

For example, I have never written a class and I wouldn't know why or where to start (something to do with the init method, right?). I mostly just write functions and scripts that perform the tasks that I need, plus some work with jupyter notebooks from time to time. I only recently got started with git and uploading my projects to github, just as a way to try to teach myself the workflow.

So, I would like to learn to be better. Can anyone recommend good resources for learning programming, but perhaps that are aimed at people who already know a language? It'd be nice to find a guide that assumes you already know more than a beginner. Any help would be appreciated.

top 41 comments
[–] robinm@programming.dev 18 points 8 months ago

Read your own code that you wrote a month ago. For every wtf moment, try to rewrite it in a clearer way. With time you will internalize what is or is not a good idea. Usually this means naming your constants, moving code into a function so it has a friendly name that explains what it does, or moving code out of a function because the abstraction you chose was not a good one. Since you have 10 years of experience it's highly possible that you already do that, so just continue :)
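
As a toy sketch of that kind of cleanup (the names and threshold value here are invented for illustration):

```python
# Before: a typical "wtf" line from an analysis script.
# peaks = [x for x in data if x > 2.5]

# After: both the constant and the intent have names.
SIGNAL_THRESHOLD = 2.5  # hypothetical cutoff, e.g. from a detector noise floor

def find_peaks(data, threshold=SIGNAL_THRESHOLD):
    """Return the samples that exceed the detection threshold."""
    return [x for x in data if x > threshold]

print(find_peaks([1.0, 3.0, 2.4, 4.2]))  # [3.0, 4.2]
```

The behavior is identical; the difference is that a month from now the intent is still readable.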

If you are motivated, I would advise taking a look at Rust. The goal is not really to be able to use it (even if it's nice to be able to write fast code to speed up your Python), but the Rust compiler is like a very exacting teacher that will not forgive any mistakes, while explaining why something is not a good idea and what you should do instead. The quality of the errors is crucial; it's what will help you understand and improve over time. So consider Rust an exercise in becoming a better Python programmer: whatever you try to do in Rust, try to understand how it applies to Python. There are many tutorials online, and the official book is a good start. In general, learning a new language with a very different paradigm is the best way to improve, since it will help you see things from a new angle.

[–] MajorHavoc@programming.dev 12 points 8 months ago (2 children)

The O'Reilly "In a Nutshell" and "Pocket Guide to" books are great for folks who can already code, and want to pick up a related tool or a new language.

The Pocket Guide to Git is an obvious choice in your situation, if you don't already have it.

As others have mentioned, you're allowed to ignore the team stuff. In git this means you have my permission to commit directly to the 'main' branch, particularly while you're learning.

Lessons that I've learned the hard way, that apply for someone scripting alone:

  • git will save your ass. Get in the habit of using it for everything ASAP, and it'll be there when you need it
  • find that one friend who waxes poetic about git, and keep them close. Usually listening politely to them wax poetically about git will do the trick. Five minutes of their time can be a real life saver later. As that friend, I know when you're using me for my git-fu, and I don't mind. It's hard for me to make friends, perhaps because I constantly wax poetically about git.
  • every code swan starts as an ugly duck that got the job done.
  • print(f"debug: {what_the_fuck_is_this}") is a valid pattern that seasoned professionals still turn to. If you're in a code environment that doesn't support it, then it's a bad code environment.
  • one peer who reads your code regularly will make you a minimum of 5x more effective. It's awkward as hell to get started, but incredibly worth it. Obviously, you traditionally should return the favor, even though you won't feel qualified. They don't really feel qualified either, so it works out. (Source: I advise real scientists about their code all the time. It's still wild to me that they, as actual scientists, listen to me - even after I see how much benefit I provide.)
[–] IonicFrog@lemmy.sdf.org 3 points 8 months ago (1 children)

print(f"debug: {what_the_fuck_is_this}") is a valid pattern that seasoned professionals still turn to. If you’re in a code environment that doesn’t support it, then it’s a bad code environment.

I've been known to print things to the console during development, but it's like eating junk food. It's better to get in the habit of using a logging framework. Insufficient logging has been in the OWASP Top 10 for a while, so you should be logging anyway. Why not logger.debug(f"{what_the_fuck_is_this}"), or get fancy with a different framework and logger.log(SUPER_LOW_LVL, f"{really_what_the_fuck_is_this}")?

You also get the bonus of not having to go back and clean up all the print statements afterward. All you have to do is set the running log level to INFO or similar to turn all that off. There was a reason you needed to see that stuff in the first place; if you ever need to see it again, just change the log level to whatever granularity you need.
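
A minimal sketch of what that looks like with Python's standard library logging module (the message and format string are arbitrary examples):

```python
import logging

# Configure once, at program start; DEBUG shows everything.
logging.basicConfig(format="%(levelname)s %(name)s: %(message)s",
                    level=logging.DEBUG)
logger = logging.getLogger(__name__)

what_the_fuck_is_this = {"shape": (3, 2)}
logger.debug("value under inspection: %r", what_the_fuck_is_this)

# Later, silence the debug chatter without deleting a single line:
logging.getLogger().setLevel(logging.INFO)
logger.debug("this one no longer prints")
```

Note the `%r` style: the logging module formats the message lazily, so the work is skipped entirely when the level is turned off.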

[–] MajorHavoc@programming.dev 3 points 8 months ago* (last edited 8 months ago)

Absolutely true.

And you make a great point: print(f"debug: {what_the_fuck_is_this}") should absolutely mature into logger.log(SUPER_LOW_LVL, f"{really_what_the_fuck_is_this}")

Unfortunately I have found that when print("debug") isn't working, usually logging isn't set up correctly either.

In a solidly built system, a garbage print line will hit the logs and raise several alerts because it's poorly formatted - making it easy for the developer to find.

Sadly, I often see the logging setup so that poorly formatted logs go nowhere, rather than raising alerts until they're fixed. This inevitably leads to both debug logs being lost and critical but slightly misformatted logs being lost.

Your point is particularly valuable when it's time to get the system fixed, because it's easier to say "logging needs to work" than "fix my stupid printf", even though they're roughly equivalent.

Edit: And getting back to the scripting scientist context, scripting scientists still have my formal official permission to just say "just make my print('debug') work".

[–] rolaulten@startrek.website 3 points 8 months ago (1 children)

Along a similar vein to making a git friend, buy your sysadmins/ops people a box of doughnuts once in a while. They (generally) all code and will have some knowledge of what you are working on.

[–] MajorHavoc@programming.dev 1 points 8 months ago

That is great advice that has served me well, as well!

[–] MxM111@kbin.social 12 points 8 months ago (1 children)

As one physicist to another, the most important things in code are long (descriptive) variable names and comments.

We usually do not do multi-person, multi-year projects, so most of the other comments on this page, especially the ones coming from programmers, are not that relevant. Classes are cool, but they are not needed and often obscure the clarity of algorithmic/functional programming.

S. Wolfram (creator of Mathematica) said something along these lines (paraphrasing): if you are writing real code in Mathematica, you are doing something wrong.

[–] UFODivebomb@programming.dev 1 points 8 months ago (1 children)

Great potatoes... This is not very good advice. Ok for prototypes that are intended to be discarded shortly after writing. Nothing more.

[–] Turun@feddit.de 2 points 8 months ago (1 children)

Yes, those prototypes are the goal here.

[–] UFODivebomb@programming.dev 1 points 8 months ago

Cool! Have fun! I wouldn't worry about a lot of code quality opinions then. Especially if somebody is looking at prototypes and thinking they are not prototypes haha

[–] demesisx@infosec.pub 9 points 8 months ago* (last edited 8 months ago)

Learn Haskell.

Since it is a research language, it is packed with academically-rigorous implementations of advanced features (currying, lambda expressions, pattern matching, list comprehension, type classes/type polymorphism, monads, laziness, strong typing, algebraic data types, parser combinators that allow you to implement a DSL in 20 lines, making illegal states unrepresentable, etc) that eventually make their way into other languages. It will force you to learn some of the more advanced concepts in programming while also giving you a new perspective that will improve your code in any language you might use.

I was big into embedded C programming years back ... and when I got to the pointers part, I couldn't figure out why I suddenly felt unsatisfied and that I was somehow doing something wrong. That instinct ended up being at least partially correct. I sensed that I was doing something unsafe (which forced me to be very careful around footguns like pointers, dedicating extra mental processes to keep track of those inherently unsafe solutions) and I wished there was some more elegant way around unsafe actions like that (or at least some language provided way of making sure those unintended side effects could be enforced by the compiler, which would prevent these kinds of bugs from getting into my code).

Years later, after not enjoying JS, TS (IMO, a porous condom over the tip of JavaScript), Swift, Python, and others, my journey brought me to FRP which eventually brought me to FP and with it, Haskell, Purescript, Rust, and Nix. I now regularly feel the same satisfaction using those languages that I felt when solving a math problem correctly. Refactoring is a pleasure with strictly typed languages like that because the compiler catches almost everything before it will even let you compile.

[–] Diplomjodler@feddit.de 8 points 8 months ago (1 children)

Forget everything you hear about OOP and just view it as a way to improve code readability. Just rewrite something convoluted with a class and you'll see what they're good for. Once you've got over the mental blockade, it'll all make more sense.

[–] WolfLink@lemmy.ml 4 points 8 months ago (1 children)

To add to this, there are kinda two main use cases for OOP. One is simply organizing your code by having a bunch of operations that could be performed on the same data be expressed as an object with different functions you could apply.

The other use case is when you have two different data types where it makes sense to perform the same operation but with slight differences in behavior.

For example, if you have a “real number” data type and a “complex number” data type, you could write classes for these data types that support basic arithmetic operations defined by a “numeric” superclass, and then write a matrix class that works for either data type automatically.
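
A Python sketch of that idea (the class and method names here are invented for illustration, not from any real library):

```python
import math
from abc import ABC, abstractmethod

class Numeric(ABC):
    """Hypothetical superclass: anything with a magnitude and addition."""
    @abstractmethod
    def magnitude(self) -> float: ...
    @abstractmethod
    def add(self, other): ...

class Real(Numeric):
    def __init__(self, value):
        self.value = value
    def magnitude(self):
        return abs(self.value)
    def add(self, other):
        return Real(self.value + other.value)

class Complex(Numeric):
    def __init__(self, re, im):
        self.re, self.im = re, im
    def magnitude(self):
        return math.hypot(self.re, self.im)
    def add(self, other):
        return Complex(self.re + other.re, self.im + other.im)

# Code written against Numeric works for either data type automatically:
def total_magnitude(numbers):
    return sum(n.magnitude() for n in numbers)

print(total_magnitude([Real(-2.0), Complex(3.0, 4.0)]))  # 7.0
```

The same `total_magnitude` works unchanged on any future `Numeric` subclass, which is the "slight differences in behavior behind a shared operation" point above.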

[–] ALostInquirer@lemm.ee 2 points 8 months ago (2 children)

One is simply organizing your code by having a bunch of operations that could be performed on the same data be expressed as an object with different functions you could apply.

Not OP, but also interested in wrapping my head around OOP and I still struggle with this in a few different respects. If what I'm writing isn't a full program, but more like a few functions to process data, is there still a use case for writing it in an OOP style? Say I'm doing what you describe, operating on the same data with different functions, if written properly couldn't a program do this even without a class structure to it? 🤔

Perhaps it's inelegant and terrible in the long term, but if the code only serves a brief purpose, is that fine? Is it more in long-term use that a class structure reveals its greater utility?

[–] WolfLink@lemmy.ml 2 points 8 months ago* (last edited 8 months ago) (1 children)

Say I'm doing what you describe, operating on the same data with different functions, if written properly couldn't a program do this even without a class structure to it? 🤔

Yeah, that's kinda where the first object-oriented programming came from. In C (which doesn't have classes) you define a struct (an arrangement of data in memory, kinda like a named tuple in Python), and then you write functions to manipulate those structs.

For example, multiplying two complex vectors might look like:

ComplexVectorMultiply(myVectorA, myVectorB, &myOutputVector, length);

Programmers decided it would be a lot more readable if you could write code that looked like:

myOutputVector = myVectorA.multiply(myVectorB);

Or even just;

myOutputVector = myVectorA * myVectorB;

(This last iteration is an example of “operator overloading”).

So yes, you can work entirely without classes, and that’s kinda how classes work under the hood. Fundamentally object oriented programming is just an organizational tool to help you write more readable and more concise code.
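
In Python, that last step works by defining the special method `__mul__`. A toy sketch (the class is invented for illustration):

```python
class ComplexVector:
    """Minimal element-wise complex vector, to show operator overloading."""
    def __init__(self, values):
        self.values = list(values)

    # The "method call" style from the second example above.
    def multiply(self, other):
        return ComplexVector(a * b for a, b in zip(self.values, other.values))

    # __mul__ is what makes the `a * b` syntax work on instances.
    def __mul__(self, other):
        return self.multiply(other)

a = ComplexVector([1 + 1j, 2 + 0j])
b = ComplexVector([2 + 0j, 0 + 1j])
print((a * b).values)  # [(2+2j), 2j]
```

All three styles from the comment above end up running the same code; the operator version is just the most readable at the call site.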

[–] ALostInquirer@lemm.ee 1 points 8 months ago

Thanks for elaborating! I'm pretty sure I've written some variations of the first form you mention in my learning projects, or broken them up in some other ways to ease myself into it, which is why I was asking as I did.

[–] Turun@feddit.de 2 points 8 months ago* (last edited 8 months ago)

I use classes to group data together. E.g.

import dataclasses

import matplotlib.pyplot as plt
import numpy

@dataclasses.dataclass
class Measurement:
    temperature: int
    voltage: numpy.ndarray
    current: numpy.ndarray
    another_parameter: bool
    
    def resistance(self) -> float:
        ...

measurements = parse_measurements()
measurements = [m for m in measurements if m.another_parameter]
plt.plot(
    [m.temperature for m in measurements], 
    [m.resistance() for m in measurements]
)

This is much nicer to handle than three different lists of temperature, voltage and current. And then a fourth list of resistances. And another list for another_parameter. Especially if you have more parameters to each measurement and need to group measurements by these parameters.

[–] ericjmorey@programming.dev 7 points 8 months ago

Do you want to work as a developer? Or do you want to want to continue with your research and analysis? If you're only writing code for your own purposes, I don't know why it matters if it's conventional.

[–] catacomb 5 points 8 months ago

If you don't already, use version control (git or otherwise) and try to write useful messages for yourself. 99% of the time, you won't need them, but you'll be thankful that 1% of the time. I've seen database engineers hack something together without version control and, honestly, they'd have looked far more professional if we could see recent changes when something goes wrong. It's also great to be able to revert back to a known good state.

Also, consider writing unit tests to prove your code does what you think it does. This is sometimes more useful for code you'll use over and over, but you might find it helpful in complicated sections where your understanding isn't great. Does the function output what it should or not? Start from some trivial cases and go from there.
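
A minimal sketch of that "start from trivial cases" approach, using plain assert statements on a made-up function:

```python
def resistance(voltage, current):
    """Ohm's law; raises on zero current instead of dividing by zero."""
    if current == 0:
        raise ValueError("current must be nonzero")
    return voltage / current

# Trivial cases first:
assert resistance(1.0, 1.0) == 1.0
assert resistance(10.0, 2.0) == 5.0

# Then the edge case you're unsure about:
try:
    resistance(5.0, 0.0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for zero current")

print("all checks passed")
```

Once a file like this exists, moving the asserts into a test runner such as pytest is mostly a matter of renaming them into `test_*` functions.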

Lastly, what's the nature of the code? As a developer, I have to live with my decisions for years (unless I switch jobs.) I need it to be maintainable and reusable. I also need to demonstrate this consideration to colleagues. That makes classes and modules extremely useful. If you're frequently writing throwaway code for one-off analyses, those concepts might not be useful for you at all. I'd then focus more on correctness (tests) and efficiency. You might find your analyses can be performed far quicker if you have good knowledge about data structures and algorithms and apply them well. I've personally reworked code written by coworkers to be 10x more efficient with clever usage of data structures. It might be a better use of your time than learning abstractions we use for large, long-term applications.

[–] vahtos@programming.dev 4 points 8 months ago (1 children)

This is only tangentially related to improving your code directly as you have asked. However, in a similar vein as using source control (git), when using Python learn to manage your environments. Venv, poetry, conda/mamba, etc are tools to look into.

I used to work with mostly scientists, and a good number of them knew some Python, but none of them knew how to properly manage their environments and it was a huge problem. They would often come to me and say "I ran this script a week ago and it worked, I tried it today without making any changes and it's throwing this error now that I don't understand." Every time it was because they accidentally changed their dependencies, using their global python install. It also made it a nightmare to try to revive old code for them, since there was almost no way to know what version of various libraries were used.

[–] ericjmorey@programming.dev 2 points 8 months ago* (last edited 8 months ago)

This is huge. Unfortunately, as you indicated, there's no standard tool for this, and new ones keep being added to the mix. Many in the science fields are pushed towards Conda, but I'm not sure it's the best option. However, Conda is infinitely better than not using anything to manage environments and dependencies.

[–] UFODivebomb@programming.dev 4 points 8 months ago

My advice comes from being a developer, and tech lead, who has brought a lot of code from scientists to production.

The best path for a company is often: do not use the code the scientist wrote, and instead have a different team rewrite the system for production. I've seen plenty of projects fail, hard, because some scientist thought their research code was production level. There is a large gap between research code and production code. Anybody who claims otherwise is naive.

This is entirely fine! Even better than attempting to build production quality code from the start. Really! Research is solving a decision problem. That answer is important; less so the code.

However, science is science. Being able to reproduce the results the research produced is essential. So there is the standard requirement of documenting the procedure used (which includes the code!) sufficiently to be reproduced. The best part is the reproduction not only confirms the science but produces a production system at the same time! Awws yea. Science!

I've seen several projects fail when scientists attempt to be production developers without proper training and skills. This is bad for the team, product, and company.

(Tho typically those "scientists" fail at building reproducible systems. So are they actually scientists? I've encountered plenty of PhDs in name only.)

So, what are your goals? To build production systems? Then those skills will have to be learned. That likely includes OO. Version control. Structural and behavioral patterns.

Not necessary to learn if that isn't your goal! Just keep in mind that if a resilient production system is the goal, well, research code is like the first pancake in a batch. Verify, taste, but don't serve it to customers.

[–] heeplr@feddit.de 3 points 8 months ago (1 children)

It's always good to learn new stuff, but in terms of productivity: don't attempt to be a programmer. Rather, attempt to write better research code (clean up code, use revision control, comment better, maybe add testing...).

And try to improve cooperation with programmers, where necessary. Close cooperation, and asking stupid questions instead of making assumptions, makes the process easy for both of you.

Also don't be afraid to consult different programmers, since beyond a certain level, experience and expertise in programming is vastly fragmented.

Experienced programmers mostly suck at your field and vice versa, and that's a good thing.

[–] QuadriLiteral@programming.dev 1 points 8 months ago (1 children)

Odd take imo. OP is a programmer, albeit perhaps not a very good one. I did a PhD (computational astrophysics) and have been working as a professional dev for the 10 years since. Imo a good programmer writes code that solves the problem at hand; I don't see that much of a difference between the problem being scientific or a backend service. It doesn't mean "write lots of boilerplate-y factories, interfaces and other layers" to me, neither in research nor outside of it.

That being said, there is so much time lost in research institutes because of shoddy programming by researchers, or simply ignorance, not knowing a debugger exists for instance. OP wanting to level up their game would almost certainly result in getting to research results faster, + they may be able to help their peers become better as well.

[–] heeplr@feddit.de 1 points 8 months ago (1 children)

25 years in the industry here. As I said, there's nothing against learning something new, but I doubt it's as easy as "leveling up".

Both fields profit a lot from experience, and it's as much of a gain for a scientist to become a software dev as for an architect to become a carpenter. It's simply not productive.

there is so much time lost in research institutes because of shoddy programming

Well, that's the way it is. Scientific code and production code have different requirements. To me that sounds like "that machine prototype is inefficient - just skip the prototype next time and build the real thing right away."

[–] QuadriLiteral@programming.dev 1 points 7 months ago

To me that sounds like “that machine prototype is inefficient - just skip the prototype next time and build the real thing right away.”

I don't think you understand my point, which is that developing the prototype takes e.g. 50% more time than it should because of complete lack of understanding of software development.

[–] wathek@discuss.online 3 points 8 months ago

There's a certain amount of fundamentals you need, after that point it's quite easy to hop languages by just looking over the documentation of that language. If you skip those fundamentals, you end up with a bunch of knowledge but don't realize you could do things way more effectively.

My recommendation: check out free resources for beginners and skip the stuff you already know thoroughly, focusing only on the stuff you don't know.

source: I'm self-taught and had to go through this process myself.

[–] wargreymon2023@sopuli.xyz 3 points 8 months ago* (last edited 8 months ago)

Think two things:

  1. optimize the control flow of your code

  2. make it easy to read

Be disciplined with these two ideas, and your code will look better as you become more experienced, 100% guaranteed.

[–] xilliah 3 points 8 months ago

I've got two tips to add to the pile you've already read.

I recommend you read the manuals related to what you are using. Have you read the python manual? And the ones for the libraries you use? If you do you'll definitely find something very useful that you didn't know about.

That and, reread your code. Over and over until it makes total sense, and only run it then. It might seem slow, and it'll require patience at first. Running and testing it will always be slower and is generally only useful when testing out the concept you had in mind. But as long as you're doing your conceptual work right, this shouldn't happen often. And so, most work will be spent trying to track down bugs in the implementation of the concept in the code. Trust me when you read your code rigorously you'll immediately find issues. In some cases use temporary prints. Oh and avoid the debugger.

[–] eveninghere 2 points 8 months ago* (last edited 8 months ago)

Computer scientist here. First, let me dare to ask the scientists here a friendly question: do you have references for your suggestions?

Code Complete 2 is a book on software engineering with plenty of proper references. Software engineering is important because you learn how to work efficiently. I have been involved in plenty of bad science-code projects that wasted taxpayers' money because of the naivety of the programmers and team management.

The book explains how and why software construction can become expensive and what to do about it, covering a vast range of topics agreed on by industrial and academic experts.

One caveat, however, is that theories are theories. Even best practices are theories. Often a young programmer tries to force some practice without checking it against reality. You know you can reuse your function to reduce the chance of bugs and save time, but have you tested whether that is really the case? Nobody can tell unless you test, or ask your team members whether it's a good idea. I've spent a good chunk of time on refactoring that didn't matter. Yet some of it mattered.

That importance of reality check is emphasized in the book Software Architecture: The Hard Parts, for example.

Now, classes, or OOP, were pushed by industry to solve its problems. Often, as in the case of Java, OOP was partly a solution for large teams: it was important to be able to collaborate while reducing the chance of accidentally shooting each other in the foot. So for a scientific project, OOP is sometimes irrelevant and sometimes relevant. Code size is one factor in determining the effectiveness of OOP, but other factors also exist.

Python uses OOP to provide flexibility (here I actually mean polymorphism, to be precise), and sometimes it becomes necessary to use this pattern because some packages rely on it.

One problem with Python's OOP is that it uses implementation inheritance. Recent languages tend to avoid this particular type of OOP, because its major rival, composition, has proven over time to make a program's behavior easier to predict.
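
A small Python sketch of that contrast, with invented class names:

```python
# Implementation inheritance: the subclass silently depends on HOW the
# parent's log() is written, and can break when the parent changes.
class Logger:
    def log(self, msg):
        print(msg)

class CsvLoggerInherit(Logger):
    def log(self, msg):
        super().log(f"{msg},")

# Composition: the wrapper owns a logger and relies only on its public
# interface, which is easier to reason about and easy to swap out.
class CsvLoggerCompose:
    def __init__(self, logger):
        self._logger = logger

    def log(self, msg):
        self._logger.log(f"{msg},")

CsvLoggerCompose(Logger()).log("x")  # prints "x,"
```

Both do the same thing here; the difference shows up later, when `Logger` evolves and only the composed version keeps its behavior obvious.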

To me, writing Python is also often easier with OOP. One popular alternative to OOP is the functional approach, but that is unfortunately not well supported in Python.

Finally, Automate the Boring Stuff with Python is a great resource on doing routine tasks quickly. Also, pick up a Pandas book and get used to its APIs, because it improves productivity to a great extent. (I could even cite an article on this! But I don't have the reference at hand.)

Oh, don't forget ChatGPT and Gemini.

[–] Andy@programming.dev 2 points 8 months ago

Two books that may be helpful:

  • Fluent Python by Luciano Ramalho
  • Python Distilled by David M. Beazley

I'm more familiar with the former, and think it's very good, but it may not give you the basic introduction to object oriented programming (classes and all that) you're looking for; the latter should.

[–] Ephera@lemmy.ml 2 points 8 months ago

Could be good to try to 'reset' your brain, by learning an entirely new programming language. Ideally, a statically typed, strict language like Rust or Java, or Scala, if you happen to have a use for it in data processing. They'll partially force you to do it the proper way, which can be eye-opening and will translate backwards to Python et al.
Just in general, getting presented the condensate of a different approach to programming, by learning a new language, can teach a lot about programming, even if you're never going back to that language.

For learning more about Git, I can recommend Oh My Git!. It takes a few hours to get through. In my experience, it's really useful to have at least seen all the tools Git provides, because if something goes sideways, you can remedy it with that.

[–] Fal@yiffit.net 2 points 8 months ago

Use an IDE if you aren't already. Jetbrains stuff is great. Having autocomplete is invaluable.

[–] Turun@feddit.de 1 points 8 months ago* (last edited 8 months ago) (1 children)

As a researcher: all the professional software engineers here have no idea about the requirements for code in a research setting.

I recommend you use

  • git. It's nice to be able to revert changes without worry.
  • descriptive variable names. The meaning of descriptive is highly dependent on your situation. Single letters can have an obvious meaning, but err on the side of longer names if you're unsure. The goal is to be able to look at a variable and instantly know what it represents.
  • virtual environments and requirements.txt. When you have your code working, you should have pip (or anaconda or whatever) take a snapshot of your current Python installation. Then you can install the exact same requirements when you want to revive your code a few months or years down the line. I didn't do that, and it's kinda biting me in the ass right now.
[–] QuadriLiteral@programming.dev 1 points 8 months ago (1 children)

As a researcher: all the professional software engineers here have no idea about the requirements for code in a research setting.

As someone with extensive experience in both: my first requirement would be readability. Single python file? Fine with that. 1k+ lines single python file without functions or other means of structuring the code: please no.

The nice thing about Python is that your IDE lets you jump into the code of the libraries you're using. I find that to be a good way to look at how experienced Python devs write code.

[–] Turun@feddit.de 2 points 8 months ago (1 children)

You can jump to definition in any language. In fact, Python may be one of the worst ones for this, because compiled libraries are so common: "real signature unknown" is all you will get sometimes. E.g. NumPy is implemented in C, not Python.

[–] QuadriLiteral@programming.dev 1 points 8 months ago (1 children)

My point about the jumping into was that you can immediately start reading the sources. Most alternative languages are compiled in some form or other so all you'll see is an API, not the implementation.

[–] Turun@feddit.de 1 points 8 months ago* (last edited 8 months ago) (1 children)

My comment was not asking for clarification, I am contradicting your claim.

Granted, my experience is mostly limited to Python and Rust. But I find that in Python you reach the end of "jump to definition" much, much sooner. Fundamental core libraries in Python are written in C, simply because the required performance cannot be reached with Python alone. So after jumping two levels you are through the thin wrapper type, and your IDE will give you an "I don't know, it's byte code".
In Rust I have yet to encounter this. Byte code is rarely used as a dependency, because compiling whatever is needed is no issue - you're compiling anyway - and it can actually allow a few more optimizations to be performed.

Edit: since wasm is not yet widespread, JavaScript may be the best language for digging deep into libraries.

[–] QuadriLiteral@programming.dev 1 points 7 months ago (1 children)

Mostly ML or data processing libraries I would assume, I've read tons of REST server and ORM python code for instance, none of that is written in C.

Wrt Rust: no experience with that. I do do a lot of C++; there you quickly reach the end, as typically you're consuming quite a few libraries, but the complete sources of those aren't part of what is parsed by the IDE, since keeping all that in memory would be unworkable.

[–] Turun@feddit.de 1 points 7 months ago

REST server and ORM python code

Fair enough, that can be achieved with pure python.