diff --git a/docs/newsletter/2024.md b/docs/newsletter/2024.md
new file mode 100644
index 00000000000..e644ac68c28
--- /dev/null
+++ b/docs/newsletter/2024.md
@@ -0,0 +1,10461 @@

# [Activism](activism.md)

* New: Introduction to activism.

    [Activism](https://en.wikipedia.org/wiki/Activism) consists of efforts to promote, impede, direct or intervene in social, political, economic or environmental reform with the desire to make changes in society toward a perceived greater good.

* New: Recommend the podcast episode Diario de Jadiya.

    [Diario de Jadiya](https://deesonosehabla.com/episodios/episodio-2-jadiya/) ([link to the audio file](https://dts.podtrac.com/redirect.mp3/dovetail.prxu.org/302/7fa33dd2-3f29-48f5-ad96-f6874909d9fb/Master_ep.2_Jadiya.mp3)): something every xenophobe should listen to. It's the diary of a Sahrawi girl who took part in the summer programme with Spanish host families.

* New: [Add Rafeef Ziadah pro-Palestine poem.](anticolonialism.md#poems)

    [Rafeef Ziadah - "Nosotros enseñamos vida, señor"](https://www.youtube.com/watch?v=neYO0kJ-6XQ)

* New: Add the video "El racismo no se sostiene".

    [El racismo no se sostiene](https://youtube.com/shorts/5Y7novO2t_c?si=dqMGW4ALFLoXZiw3)

* New: [New article against tourism.](antitourism.md#artículos)

    - [Abolir el turismo - Escuela de las periferias](https://www.elsaltodiario.com/turismo/abolir-turismo): Wherever we end up, it can't be that it's easier to imagine the end of capitalism than the end of tourism.

## [Antifascism](antifascism.md)

* New: [Encourage people to support the crowdfunding for the Zaragoza 6.](antifascism.md#campañas)

    [Crowdfunding for the freedom of the 6 of Zaragoza](https://www.goteo.org/project/libertad-6-de-zaragoza)

## Hacktivism

### [Chaos Communication Congress](ccc.md)

* New: Introduce the CCC.

    [Chaos Communication Congress](https://events.ccc.de/en/) is the best gathering of hacktivism in Europe.
    **Prepare yourself for the congress**

    You can follow [MacLemon's checklist](https://github.com/MacLemon/CongressChecklist).

    **[Install the useful apps](https://events.ccc.de/congress/2024/hub/de/wiki/apps/)**

    *The schedule app*

    You can use either the Fahrplan app or Giggity. I've been using the latter for a while, so that's the one I use.

    *The navigation app*

    `c3nav` is an application to get around the congress. The official F-Droid package is outdated, so add [their repository](https://f-droid.c3nav.de/fdroid/repo/?fingerprint=C1EC2D062F67A43F87CCF95B8096630285E1B2577DC803A0826539DF6FB4C95D) to get the latest version.

    **Reference**

    - [Home](https://events.ccc.de/en/)
    - [Engelsystem](https://engel.events.ccc.de/)
    - [Angel FAQ](https://engel.events.ccc.de/faq)

* New: [Introduce the Angel's system.](ccc.md#angel's-system)

    [Angels](https://angelguide.c3heaven.de/) are participants who volunteer to make the event happen. They are neither getting paid for their work nor do they get free admission.

    **[Expectations](https://angelguide.c3heaven.de/#_expectations)**

    Helping at our events also comes with some simple but important expectations of you:

    - Be on time for your shift or give Heaven early notice.
    - Be well rested, sober and not hungry.
    - Be open-minded and friendly in attitude.
    - Live our moral values:
        - Be excellent to each other.
        - All creatures are welcome.

    **[Quickstart](https://angelguide.c3heaven.de/#_quick_start)**

    - Create yourself an [Engelsystem account](https://engel.events.ccc.de/).
    - Arrive at the venue.
    - Find [the Heaven](https://c3nav.de/) and go there.
    - Talk to a welcome angel or a shift coordinator to get your angel badge and get marked as arrived.
    - If you have any questions, you can always ask the shift coordinators behind the counter.
    - Attend an angel meeting:
        - They are announced in the Engelsystem news.
    - Click yourself an interesting shift:
        - Read the shift descriptions first.
    - Participate in your shift:
        - Use the navigation to find the right place.
        - Arrive a little bit early at the meeting point.
    - Rest for at least one hour.
    - Repeat from step 5.

    And always, have a lot of fun.

    To get more insights read [this article](https://jascha.wtf/angels-at-chaos-about-volunteering-and-fitting-in/).

    **[The Engelsystem](https://angelguide.c3heaven.de/#_the_engelsystem)**

    The [Engelsystem](https://engel.events.ccc.de/) is the central place to distribute the work to all the helping angels. It can be a bit overwhelming at the beginning, but you will get used to it and find your way around.

    As you might have seen, there are many different shifts and roles for angels, some sounding more appealing than others. There are shifts where you need to have some knowledge before you can take them. This knowledge is given in introduction meetings or by taking an unrestricted shift in the team and getting trained on the job. These introduction meetings are announced in the Engelsystem under the tab "Meetings". Heaven and the teams try to make sure that restrictions are only in place for shifts where they are absolutely needed.

    Most restrictions only need a meeting or some unrestricted shifts with the team to get lifted. Harder restrictions are in place where volunteers need special certification, get access to systems with a huge amount of data (e.g. mail queues with emails from participants) or handle big piles of money. Usually the requirements for joining an angeltype are included in its description.

    The restricted shifts are especially tempting because, after all, we want to get the event running, don't we?
From our personal experience, what gets the event running are the most common tasks: guarding a door, collecting bottles and trash, washing dishes in the angel kitchen, being on standby to hop in when spontaneous help is needed, or checking the wristbands at the entrance.

    If there are any further questions about angeltypes, the description of the angeltype usually includes contact data such as a DECT number or an e-mail address that can be used. Alternatively, you can also ask one of the persons of the respective angeltype mentioned under "Supporter".

    **[Teams](https://angelguide.c3heaven.de/#_teams)**

    Congress is organized by different teams, each with its own area of expertise.

    All teams are self-organized and provide their own set of services to the event.

    Teams spawn into existence from a need not yet fulfilled. They are seldom created by an authority.

    Check out the [different teams](https://angelguide.c3heaven.de/#_teams) to see which one suits you best.

    [Some people](https://jascha.wtf/angels-at-chaos-about-volunteering-and-fitting-in/) suggest not trying to fit into special roles at your first event. The roles will find you, not the other way around. Our community is not about personal growth but about contributing to each other and growing by doing so.

    **Perks**

    Being an angel also comes with some perks.
While we hope that participation is reward enough, here is a list of things that are exclusive to angels:

    - Community acknowledgement
    - Hanging out in Heaven and the angel hack center with its chill out area
    - Free coffee and (sparkling) water
    - Warm drinks or similar to make the cold night shifts more bearable

    **Rewards**

    If you have contributed a certain amount of time, you may receive access to:

    - Fantastic hot vegan and vegetarian meals
    - The famous limited™ angel T-shirt in Congress design
    - Maybe some other perks

## Feminism

### [Privileges](privileges.md)

* New: [Add nice video on male privileges.](privileges.md#videos)

    [La intuición femenina, gracias al lenguaje](https://twitter.com/almuariza/status/1772889815131807765?t=HH1W17VGbQ7K-_XmoCy_SQ&s=19)

## [Free Knowledge](free_knowledge.md)

* Correction: Update the way of seeding ill knowledge torrents.

    A good way to contribute is by seeding the ill torrents. You can [generate a list of torrents that need seeding](https://annas-archive.org/torrents#generate_torrent_list) up to a limit in TB. If you follow this path, take care of IP leaking.

## [Free Software](free_software.md)

* New: Recommend the article "El software libre también necesita jardineros".

    - [El software libre también necesita jardineros](https://escritura.social/astrojuanlu/el-software-libre-tambien-necesita-jardineros)

## [Conference organisation](conference_organisation.md)

* New: Software to manage the conference.
    There is some open source software that can make your life easier when hosting a conference:

    - [Frab](https://frab.github.io/frab/)
    - [Pretalx](https://pretalx.com/p/about/)
    - [Wafer](https://wafer.readthedocs.io/en/latest/)

    In addition to managing talks from the call for papers until the event itself, these tools can help attendees visualise the talk schedule with [EventFahrplan](https://github.com/EventFahrplan/EventFahrplan?tab=readme-ov-file), which is what's used in the Chaos Computer Club congress.

    If you also want to coordinate helpers and shifts, take a look at [Engelsystem](https://engelsystem.de/en).

## [Luddites](luddites.md)

* New: Nice comic about the Luddites.

    [Comic about luddites](https://www.technologyreview.com/2024/02/28/1088262/luddites-resisting-automated-future-technology/)

# Life Management

## [Time management](time_management.md)

* New: [Anticapitalist approach to time management.](time_management.md#anticapitalist-approach-to-time-management)

    Time management is being used to perpetuate the now hegemonic capitalist values. It's a pity, because the underlying concepts are pretty useful and interesting, but they are oriented towards improving productivity and being able to deal with an increasing amount of work. Basically they're always telling you to be a better cog. It doesn't matter how good you are, there is always room for improvement. I've fallen into this trap for a long time (I'm still getting my head out of the hole) and I'm trying to amend things by applying the concepts with an anticapitalist mindset. The turning point was reading [Four Thousand Weeks: Time Management for Mortals by Oliver Burkeman](https://en.wikipedia.org/wiki/Four_Thousand_Weeks:_Time_Management_for_Mortals); the article will have book extracts mixed with my way of thinking.
    Some (or most) of what's written in this article may not apply if you're not a male, white, young, cis, hetero, European, university-educated, able-bodied, """wealthy""" person. You need to be at a certain level of the social ladder to even start thinking in these terms. And depending on the number of oppressions you're suffering, you'll have more or less room to maneuver. That margin is completely outside our control, so by no means should we feel guilty of not being able to manage time. What follows are just guidelines to deal with this time anxiety imposed by the capitalist system with whatever air we have to breathe.

    **Changing the language**

    The easiest way to change the underlying meaning is to change the language. Some substitutions are:

    - `work` -> `focus`: Nowadays we use `work` everywhere, even outside the labor environment. For example *I work very hard to achieve my goals*. Working is the action of selling your time and energy in order to get the resources you need to live. It has an intrinsic meaning of sacrifice, doing something we don't want to do to get another thing in return. That's a tainted way of thinking about your personal time. I find `focus` is a great substitute, as it doesn't have all those connotations. There are similar substitutions based on the same argument, such as: `workspace` -> `space of focus`, `workflow` -> `action flow` or just `flow`.
    - `task` -> `action`: Similar to `work`, a `task` is something you kind of feel obliged to do. It uses a negative mindset to set the perfect scenario to feel guilty when you fail to do them. But you're on your personal time; it should be fine not to do an action for whatever reason. `Action`, on the other hand, fosters a positive way of thinking: it suggests change, movement in a way that helps you move forward. There are also other derived terms such as `task manager` -> `action manager`.
    - `productivity` -> `efficiency`: `Productivity` is the measurement of how fast or well you create products. And [products are](https://dictionary.cambridge.org/dictionary/english/product) something that is made to be sold. Again this introduces a monetary mindset into all aspects of our life. `Efficiency`, on the other hand, is the quality of achieving the largest amount of change using as little time, energy or effort as possible ([Cambridge](https://dictionary.cambridge.org/dictionary/english/efficiency) doesn't agree with me though :P. It may be because universities are also another important vector of spreading capitalist values `:(`). Using efficiency we're focusing more on improving the process itself, so it can be applied, for example, to how to optimize your enjoyment of doing nothing. Which is completely antagonistic to the concept of productivity.

    **Changing the mindset**

    There is a widespread feeling that we're always short on time. We're obsessed with our overfilled inboxes and lengthening todo lists, haunted by the guilty feeling that we ought to be getting more done, or different things done, or both. At the same time we're deluged with advice on living the fully optimized life to squeeze the most from your time. And it gets worse as you age, because time seems to speed up as you get older, steadily accelerating until months begin to flash by in what feels like minutes.

    The real problem isn't our limited time. It's that we've unwittingly inherited, and feel pressured to live by, a troublesome set of ideas about how to use our limited time, all of which are pretty much guaranteed to make things worse. What follows is a list of mindset changes from the traditional time management bibliography that can set the bases of a healthier anticapitalist one.

    **Time is not a resource to spend**

    Before timetables, life rhythms emerged organically from the tasks people needed to do.
You milked the cows when they needed milking and harvested the crops when it was harvest time. Anyone who tried imposing an external schedule on any of that, for example doing a month's milking in a single day to get it out of the way, would rightly have been considered a lunatic.

    There was no need to think of time as something abstract and separate from life. In those days before clocks, when you did need to explain how long something might take, your only option was to compare it with some other concrete activity. People were untroubled by any notion of time "ticking away", thus living a heightened awareness of the vividness of things, the feeling of timelessness. Also known as living in deep time, or being in the flow, when the boundary separating the self from the rest of reality grows blurry and time stands still.

    There's one huge drawback in giving so little thought to the abstract idea of time, though, which is that it severely limits what you can accomplish. As soon as you want to coordinate the actions of more than a handful of people, you need a reliable, agreed-upon method of measuring time. This is why the first mechanical clocks came to be invented.

    Making time standardized and visible in this fashion inevitably encourages people to think of it as an abstract thing with an independent existence, distinct from the specific activities on which one might spend it: "time" is what ticks away as the hands move around the clock face.

    The next step was to start treating time as a resource, something to be bought and sold and used as efficiently as possible. This mindset shift serves as the precondition for all the uniquely modern ways in which we struggle with time today. Once time is a resource to be used, you start to feel pressure, whether from external forces or from yourself, to use it well, and to berate yourself when you feel you've wasted it.
When you're faced with too many demands, it's easy to assume that the only answer must be to make *better use* of time, by becoming more efficient, driving yourself harder, or working longer, instead of asking whether the demands themselves might be unreasonable.

    Soon your sense of self-worth gets completely bound up with how you're using time: it stops being merely the water in which you swim and turns into something you feel you need to dominate or control if you're to avoid feeling guilty, panicked or overwhelmed.

    The fundamental problem is that this attitude towards time sets up a rigged game in which it's impossible ever to feel as though you're doing well enough. Instead of simply living our lives as they unfold in time, it becomes difficult not to value each moment primarily according to its usefulness for some future goal, or for some future oasis of relaxation you hope to reach once your tasks are finally "out of the way".

    Ultimately it backfires. It wrenches us out of the present, leading to a life spent leaning into the future, worrying about whether things will work out, experiencing everything in terms of some later, hoped-for benefit, so that peace of mind never quite arrives. And it makes it all but impossible to experience *the flow*, that sense of timeless time which depends on forgetting the abstract yardstick and plunging back into the vividness of reality instead.

    **If you don't disavow capitalism an increase in efficiency will only make things worse**

    All this context makes us eager to believe the promises of time management frameworks (like [GTD](gtd.md)) that if you improve your efficiency you'll get more time to enjoy your life. If you follow the right time management system, build the right habits, and apply sufficient self-discipline, you will win the struggle with time.

    Then reality kicks in: you never win the struggle and only feel more stressed and unhappy.
You realize that all the time you've saved is automatically filled up by more things to do in a never-ending feedback loop. It's true that you get more done, and yet, paradoxically, you only feel busier, more anxious and somehow emptier as a result. Time feels like an unstoppable conveyor belt, bringing us new actions as fast as we can dispatch the old ones; and becoming more efficient just seems to cause the belt to speed up. Or else, eventually, to break down.

    It also has another side effect. As life accelerates, everyone grows more impatient. It's somehow vastly more aggravating to wait two minutes for the microwave than two hours for the oven, or ten seconds for a slow-loading web page versus three days to receive the same information by mail.

    Denying reality never works, though. It may provide some immediate relief, because it allows you to go on thinking that at some point in the future you might, at last, feel totally in control. But it can't ever bring the sense that you're doing enough (that you *are* enough), because it defines *enough* as a kind of limitless control that no human can attain. Instead, the endless struggle leads to more anxiety and a less fulfilling life. For example, the more you believe you might succeed in "fitting everything in", the more commitments you naturally take on, and the less you feel the need to ask whether each new commitment is truly worth a portion of your time, and so your days inevitably fill with more activities you don't especially value. The more you hurry, the more frustrating it is to encounter tasks that won't be hurried; the more compulsively you plan for the future, the more anxious you feel about any remaining uncertainties, of which there will always be plenty.

    Time management used this way serves as a distraction to numb our minds:

    - It may hide the sense of precariousness inherent to the capitalist world we live in.
      If you could meet every boss's demand, while launching various side projects on your own, maybe one day you'd finally feel secure in your career and your finances.
    - It diverts your energies from fully experiencing the reality in which you find yourself, holding at bay certain scary questions about what you're doing with your life, and whether major changes might not be needed. As long as you're always just on the cusp of mastering time, you can avoid the thought that what life is really demanding from you might involve surrendering the craving for mastery and diving into the unknown instead.

    **Embrace the finitude of time**

    We recoil from the notion that this is it. That *this life*, with all its flaws and inescapable vulnerabilities, its extreme brevity, and our limited influence over how it unfolds, is the only one we'll get a shot at. Instead, we mentally fight against the way things are, so that we don't have to consciously participate in what it's like to feel claustrophobic, imprisoned, powerless, and constrained by reality.

    Our troubled relationship with time arises largely from this same effort to avoid the painful constraints of reality. And most of our strategies for becoming more efficient make things worse, because they're really just ways of furthering the avoidance. After all, it's painful to confront how limited your time is, because it means that tough choices are inevitable and that you won't have time for all you once dreamed you might do. It's also painful to accept your limited control over the time you do get: maybe you simply lack the stamina or talent or other resources to perform well in all the roles you feel you should. And so, rather than face our limitations, we engage in avoidance strategies, in an effort to carry on feeling limitless.
We push ourselves harder, chasing fantasies of the perfect work-life balance, or we implement time management systems that promise to make time for everything, so that tough choices won't be required. Or we procrastinate, which is another means of maintaining the feeling of omnipotent control over life, because you needn't risk the upsetting experience of failing at an intimidating project if you never even start it. We fill our minds with busyness and distraction to numb ourselves emotionally. Or we plan compulsively, because the alternative is to confront how little control over the future we really have.

    **Heal yourself from FOMO**

    In practical terms, a limit-embracing attitude to time means organizing your days with the understanding that you definitely won't have time for everything you want to do, or that other people want you to do, and so, at the very least, you can stop beating yourself up for failing. Since hard choices are unavoidable, what matters is learning to make them consciously, deciding what to focus on and what to neglect, rather than letting them get made by default, or deceiving yourself that, with enough hard work and the right time management tricks, you might not have to make them at all. It also means resisting the temptation to "keep your options open" in favour of deliberately making big, daunting, irreversible commitments, which you can't know in advance will turn out for the best, but which reliably prove more fulfilling in the end. And it means standing firm in the face of FOMO (fear of missing out), because you come to realize that missing out on something (indeed, on almost everything) is basically guaranteed. Which isn't actually a problem anyway, it turns out, because "missing out" is what makes your choices meaningful in the first place.
Every decision to use a portion of time on anything represents the sacrifice of all the other ways in which you could have spent that time, but didn't, and to willingly make that sacrifice is to take a stand, without reservation, on what matters most to you.

    **Embrace your control limits**

    The more you try to manage your time with the goal of achieving a feeling of total control and freedom from the inevitable constraints of being human, the more stressful, empty, and frustrating life gets. But the more you confront the facts of finitude instead, and work with them rather than against them, the more efficient, meaningful and joyful life becomes. Anxiety won't ever completely go away; we're even limited, apparently, in our capacity to embrace our limitations. But I'm aware of no other time management technique that's half as effective as just facing the way things truly are.

    Time pressure comes largely from forces outside our control: from a cutthroat economy; from the loss of the social safety networks that used to help ease the burdens of work and childcare; and from the sexist expectation that women must excel in their careers while assuming most of the responsibilities at home. None of that will be solved with time management. Fully facing the reality of it can only help, though. So long as you continue to respond to impossible demands on your time by trying to persuade yourself that you might one day find some way to do the impossible, you're implicitly collaborating with those demands. Whereas once you deeply grasp that they are impossible, you'll stop believing the delusion that any of that is ever going to bring satisfaction and will be newly empowered to resist them, letting you focus instead on building the most meaningful life you can in whatever situation you're in.

    Seeing and accepting our limited powers over our time can prompt us to question the very idea that time is something you use in the first place.
There is an alternative: the notion of letting time use you, approaching life not as an opportunity to implement your predetermined plans for success but as a matter of responding to the needs of your place and your moment in history.

    **Embrace the community constraints**

    Moreover, most of us seek a specifically individualistic kind of mastery over time. Our culture's ideal is that you alone should control your schedule, doing whatever you prefer, whenever you want, because it's scary to confront the truth that almost everything worth doing depends on cooperating with others, and therefore on exposing yourself to the emotional uncertainties of relationships. In the end, the more individual sovereignty you achieve over your time, the lonelier you get. The truth, then, is that freedom sometimes is to be found not in achieving greater sovereignty over your own schedule but in allowing yourself to be constrained by the rhythms of community: participating in forms of social life where you don't get to decide exactly what you do or when you do it. And it leads to the insight that meaningful efficiency often comes not from hurrying things up but from letting them take the time they take.

    **Live for today not tomorrow**

    It doesn't matter what you do: we all sense that there are always more important and fulfilling ways we could be spending our time, even if we can't say exactly what they are, yet we systematically spend our days doing other things instead. This feeling can take many forms: the desire to devote yourself to some larger cause, continuously demanding more from yourself, desiring to spend more time with your loved ones.

    Our attempts to become more efficient may have the effect of pushing the genuinely important stuff even further over the horizon.
Our days are spent trying to "get through" tasks, in order to get them "out of the way", with the result that we live mentally in the future, waiting for when we'll finally get around to what really matters, and worrying in the meantime that we don't measure up, that we might lack the drive or stamina to keep pace with the speed at which life now seems to move. We live in a constant spirit of joyless urgency.

* New: [Time is not a resource to be tamed.](time_management.md#time-is-not-a-resource-to-be-tamed)

    You'll see everywhere the concept of `time management`. I feel it's daring to suggest that you have the power to actually manage time. No, you can't, as much as you can't tame the sea. [Time is not a resource to be spent or be managed](#time-is-not-a-resource-to-be-spent); the best we can do is try to understand its flows and navigate them the best we can.

* New: Keep on summing up Oliver Burkeman's book.

    **Efficiency doesn't necessarily give you more time**

    We're eager to believe the promises of time management frameworks (like [GTD](gtd.md)) that if you improve your efficiency you'll get more time to enjoy your life. If you follow the right time management system, build the right habits, and apply sufficient self-discipline, you will win the struggle with time.

    Then reality kicks in: you never win the struggle and only feel more stressed and unhappy. You realize that all the time you've saved is automatically filled up by more things to do in a never-ending feedback loop. Time feels like an unstoppable conveyor belt, bringing us new actions as fast as we can dispatch the old ones; and becoming more efficient just seems to cause the belt to speed up. Or else, eventually, to break down. It's true that you get more done, and yet, paradoxically, you only feel busier, more anxious and somehow emptier as a result.
    It gets even worse because [importance is relative](time_management.md#importance-is-relative) and you may fall into [efficiency traps](time_management.md#be-mindful-of-the-efficiency-trap).

    **Heal yourself from FOMO**

    Another problem that FOMO brings us is that it leads us to lives where you "truly lived" only if you've lived all the experiences you could live. This leads to a frustrating life, as the world offers an infinity of them, so getting a handful of them under your belt brings you no closer to a sense of having feasted on life's possibilities. You lead yourself into another [efficiency trap](#be-mindful-of-the-efficiency-trap) where the more you experience, the more additional wonderful experiences you start to feel you could have on top of all those you've already had, with the result that the feeling of existential overwhelm gets worse. To fight this existential overwhelm you can resist the urge to consume more and more experiences and embrace the idea that you're going to miss most of them. You'll then be able to focus on fully enjoying the tiny slice of experiences you actually do have time for.

    This FOMO fever is normal given that we're more conscious of the limits of our time (after discarding the afterlife), the increase of choices that the world has brought us, and the internet amplifier.

    **You do what you can do**

    It's usual to feel as though you absolutely must do more than you can do. We live overwhelmed, in a constant anxiety of fearing, or knowing for certain, that the actions we want to carry out won't fit in our available time. It looks like this feeling arises on every step of the economic ladder (as shown in the works of Daniel Markovits).

    The thing is that the idea in itself doesn't make any sense. You can't do more than you can do, even if you must.
If you truly don't have time for everything you want to do, or feel you ought to do, or that others are badgering you to do, then, well, you don't have time, no matter how grave the consequences of failing to do it all might prove to be. So technically it's irrational to feel troubled by an overwhelming to-do list. You'll do what you can, you won't do what you can't, and the tyrannical inner voice insisting that you must do everything is simply mistaken. We rarely stop to consider things so rationally, though, because that would mean confronting the painful truth of our limitations. We would be forced to acknowledge that there are hard choices to be made: which balls to let drop, which people to disappoint, which ambitions to abandon, which roles to fail at... Instead, in an attempt to avoid these unpleasant truths, we deploy the strategy that dominates most conventional advice on how to deal with busyness: we tell ourselves we'll just have to find a way to do more. So to address our busyness we're making ourselves busier still.

    **Importance is relative**

    The problem here is that you'll never be able to make time for everything that feels important. A mindset similar to the one in the section [Efficiency doesn't give you more time](#efficiency-doesnt-give-you-more-time) can be applied. The reason isn't that you haven't yet discovered the right time management tricks, or applied sufficient effort, or that you're generally useless. It's that the underlying assumption is unwarranted: there's no reason to believe you'll make time for everything that matters simply by getting more done. For a start, what "matters" is subjective, so you've no grounds for assuming that there will be time for everything that you, or anyone else, deems important. But the other exasperating issue is that if you succeed in fitting more in, you'll find the goalposts start to shift: more things will begin to seem important, meaningful or obligatory.
Acquire a reputation for doing your work at amazing speed, and you'll be given more of it. An example of this is gathered in Ruth Schwartz Cowan's book *More Work for Mother*, which shows that when washing machines and vacuum cleaners appeared no time was saved at all, because society's standards of cleanliness rose to offset the benefits. What needs doing expands so as to fill the time available for its completion.
+
+    **Be mindful of the efficiency trap**
+
+    Sometimes improving your efficiency may lead you to a worse scenario (the "efficiency trap"), where it won't generally result in the feeling of having "enough time", because, all else being equal, the demands will increase to offset any benefits. Far from getting things done, you'll be creating new things to do. A clear example of this is email management. Every time you reply to an email, there's a good chance of provoking a reply to that email, which itself may require another reply, and so on and so on. At the same time, you'll become known as someone who responds promptly to email, so more people will consider it worth their while to message you to begin with. So it's not simply that you never get through your email; it's that the process of "getting through your email" actually generates more email.
+
+    For most of us, most of the time, it isn't feasible to avoid the efficiency trap altogether, but you can stop believing you'll ever solve the challenge of busyness by cramming more in, because that just makes matters worse. And once you stop investing in the idea that you might one day achieve peace of mind that way, it becomes easier to find peace of mind in the present, in the midst of overwhelming demands, because you're no longer making your peace of mind dependent on dealing with all the demands. Once you stop believing that it might somehow be possible to avoid hard choices about time, it gets easier to make better ones. 
+
+    If you also know that efficiency traps exist, you may detect them and try to get the benefits without the penalties.
+
+    **Do the important stuff**
+
+    The worst aspect of the trap is that it's also a matter of quality. The harder you struggle to fit everything in, the more of your time you'll find yourself spending on the least meaningful things. This is because the more firmly you believe it ought to be possible to find time for everything, the less pressure you'll feel to ask whether any given activity is the best use of a portion of your time. Each time something new shows up, you'll be strongly biased in favor of accepting it, because you'll assume you needn't sacrifice any other tasks or opportunities in order to make space for it. Soon your life will be automatically filled not just with more things but with more trivial or tedious things.
+
+    The important stuff gets postponed because such tasks need your full focus, which means waiting until you have a good chunk of free time and fewer small-but-urgent tasks tugging at your attention. So you spend your energy on clearing the decks, cranking through the smaller stuff to get it out of the way, only to discover that doing so takes the whole day, that the decks are filled up again overnight and that the moment for doing the important stuff never arrives. One can waste years this way, systematically postponing precisely the things one cares about most.
+
+    What's needed in these situations is to resist the urge to be on top of everything and learn to live with the anxiety of feeling overwhelmed without automatically responding by trying to fit more in. Instead of clearing the decks, decline to do so, focusing instead on what's truly of greatest consequence while tolerating the discomfort of knowing that, as you do so, the decks will be filling up further, with emails and errands and other to-dos, many of which you may never get around to at all. 
+
+    You'll sometimes still decide to drive yourself hard in an effort to squeeze more in, when circumstances absolutely require it. But that won't be your default mode, because you'll no longer be operating under the illusion of one day making time for everything.
+
+    **Evaluate what you miss when you increase your efficiency**
+
+    Part of the benefit of efficiency is that you free yourself from tedious experiences; the side effect is that sometimes we're not conscious that we're removing experiences we valued. So even if everything runs more smoothly, smoothness is a dubious virtue, since it's often the unsmoothed textures of life that make it livable, helping nurture the relationships that are crucial for mental and physical health, and for the resilience of our communities. For example, if you buy your groceries online you miss the chance to regularly meet your neighbours at the local grocery store.
+
+    Convenience makes things easy, but without regard to whether easiness is truly what's most valuable in any given context. When you render a process more convenient you drain it of its meaning. The effect of convenience isn't just that the given activity starts to feel less valuable, but that we stop engaging in certain valuable activities altogether, in favour of more convenient ones. Because you can stay home, order food online, and watch sitcoms on a streaming service, you find yourself doing so, although you might be perfectly aware that you'd have had a better time if you had met with your friends.
+
+    Meanwhile, those aspects of life that resist being made to run more smoothly start to seem repellent. When you can skip the line and buy concert tickets on your phone, waiting in line to vote in an election is irritating. 
As convenience colonizes everyday life, activities gradually sort themselves into two types: the kind that are now far more convenient, but that feel empty or out of sync with our true preferences; and the kind that now seem intensely annoying because of how inconvenient they remain. Resisting all this is difficult because capital is winning this discourse and you'll have more pressure from your environment to stay convenient.
+
+### [vdirsyncer](vdirsyncer.md)
+
+* New: [Troubleshoot Database is locked.](vdirsyncer.md#database-is-locked)
+
+    First try to kill all stray vdirsyncer processes; if that doesn't work, check for more solutions in [this issue](https://github.com/pimutils/vdirsyncer/issues/720).
+
+* New: [Sync to a read-only ics.](vdirsyncer.md#sync-to-a-read-only-ics)
+
+    ```ini
+    [pair calendar_name]
+    a = "calendar_name_local"
+    b = "calendar_name_remote"
+    collections = null
+    conflict_resolution = ["command", "vimdiff"]
+    metadata = ["displayname", "color"]
+
+    [storage calendar_name_local]
+    type = "filesystem"
+    path = "~/.calendars/calendar_name"
+    fileext = ".ics"
+
+    [storage calendar_name_remote]
+    type = "http"
+    url = "https://example.org/calendar.ics"
+    ```
+
+* New: [Automatically sync calendars.](vdirsyncer.md#automatically-sync-calendars)
+
+    You can use the script shown in the [automatically sync emails](#script-to-sync-emails-and-calendars-with-different-frequencies) section.
+
+### [Org Mode](orgmode.md)
+
+* New: [Start working on a task dates.](orgmode.md#start-working-on-a-task-dates)
+
+    `SCHEDULED` defines when you plan to start working on that task.
+
+    The headline is listed under the given date. In addition, a reminder that the scheduled date has passed is present in the compilation for today, until the entry is marked as done or [disabled](#how-to-deal-with-overdue-SCHEDULED-and-DEADLINE-tasks).
+
+    ```org
+    *** TODO Call Trillian for a date on New Years Eve. 
+    SCHEDULED: <2004-12-25 Sat>
+    ```
+
+    Although it's not a good idea (as it promotes kicking the can down the road), if you want to delay the display of this task in the agenda, use `SCHEDULED: <2004-12-25 Sat -2d>`: the task is still scheduled on the 25th but will appear two days later. In case the task contains a repeater, the delay is considered to affect all occurrences; if you want the delay to only affect the first scheduled occurrence of the task, use `--2d` instead.
+
+    Scheduling an item in Org mode should not be understood in the same way that we understand scheduling a meeting. Setting a date for a meeting is just [a simple appointment](#appointments); you should mark this entry with a simple plain timestamp, to get this item shown on the date where it applies. This is a frequent misunderstanding among Org users. In Org mode, scheduling means setting a date when you want to start working on an action item.
+
+    You can set it with `s` (Default: `ois`).
+
+* New: [Deadlines.](orgmode.md#deadlines)
+
+    `DEADLINE` dates are like [appointments](#appointments) in the sense that they define when the task is supposed to be finished. On the deadline date, the task is listed in the agenda. The difference with appointments is that you also see the task in your agenda if it is overdue, and you can set a warning about the approaching deadline, starting `org_deadline_warning_days` before the due date (14 by default). It's useful then to set a `DEADLINE` for those tasks whose deadline you don't want to miss.
+
+    An example:
+
+    ```org
+    * TODO Do this
+    DEADLINE: <2023-02-24 Fri>
+    ```
+
+    You can set it with `d` (Default: `oid`).
+
+    If you need a different warning period for a special task, you can specify it. For example, set a warning period of 5 days with `DEADLINE: <2004-02-29 Sun -5d>`.
+
+    If you're like me, you may want to remove the warning feature of `DEADLINE` to be able to keep your agenda clean. 
Most of the time you are able to finish the task within the day, and for those that you can't, specify a `SCHEDULED` date. To do so, set the default number of warning days to `0`.
+
+    ```lua
+    require('orgmode').setup({
+      org_deadline_warning_days = 0,
+    })
+    ```
+
+    Using too many tasks with a `DEADLINE` will clutter your agenda. Use it only for the actions where you need a reminder; try using [appointment](#appointments) dates instead. The problem with appointments is that once the date is over you don't get a reminder in the agenda that it's overdue; if you need this, use `DEADLINE` instead.
+
+* New: [How to deal with overdue SCHEDULED and DEADLINE tasks.](orgmode.md#how-to-deal-with-overdue-scheduled-and-deadline-tasks.)
+* New: Introduce org-rw.
+
+    [`org-rw`](https://github.com/kenkeiras/org-rw) is a Python library to process your orgmode files.
+
+    Installation:
+
+    ```bash
+    pip install org-rw
+    ```
+
+    Load an orgmode file:
+
+    ```python
+    from org_rw import load
+
+    with open('your_file.org', 'r') as f:
+        doc = load(f)
+    ```
+
+* New: [Install using lazyvim.](orgmode.md#using-lazyvim)
+
+    ```lua
+    return {
+      'nvim-orgmode/orgmode',
+      dependencies = {
+        { 'nvim-treesitter/nvim-treesitter', lazy = true },
+      },
+      event = 'VeryLazy',
+      config = function()
+        -- Load treesitter grammar for org
+        require('orgmode').setup_ts_grammar()
+
+        -- Setup treesitter
+        require('nvim-treesitter.configs').setup({
+          highlight = {
+            enable = true,
+            additional_vim_regex_highlighting = { 'org' },
+          },
+          ensure_installed = { 'org' },
+        })
+
+        -- Setup orgmode
+        require('orgmode').setup({
+          org_agenda_files = '~/orgfiles/**/*',
+          org_default_notes_file = '~/orgfiles/refile.org',
+        })
+      end,
+    }
+    ```
+
+* New: [Troubleshoot orgmode with dap.](orgmode.md#troubleshoot-orgmode-with-dap)
+
+    Use the next config and follow the steps of [Create an issue in the orgmode repository](orgmode.md#create-an-issue-in-the-orgmode-repository).
+
+    ```lua
+    vim.cmd([[set runtimepath=$VIMRUNTIME]])
+    vim.cmd([[set packpath=/tmp/nvim/site]])
+
+    local package_root = '/tmp/nvim/site/pack'
+    local install_path = package_root .. '/packer/start/packer.nvim'
+
+    local function load_plugins()
+      require('packer').startup({
+        {
+          'wbthomason/packer.nvim',
+          { 'nvim-treesitter/nvim-treesitter' },
+          { 'nvim-lua/plenary.nvim' },
+          { 'nvim-orgmode/orgmode' },
+          { 'nvim-telescope/telescope.nvim' },
+          { 'lyz-code/telescope-orgmode.nvim' },
+          { 'jbyuki/one-small-step-for-vimkind' },
+          { 'mfussenegger/nvim-dap' },
+          { 'kristijanhusak/orgmode.nvim', branch = 'master' },
+        },
+        config = {
+          package_root = package_root,
+          compile_path = install_path .. 
'/plugin/packer_compiled.lua', + }, + }) + end + + _G.load_config = function() + require('orgmode').setup_ts_grammar() + require('nvim-treesitter.configs').setup({ + highlight = { + enable = true, + additional_vim_regex_highlighting = { 'org' }, + }, + }) + + vim.cmd([[packadd nvim-treesitter]]) + vim.cmd([[runtime plugin/nvim-treesitter.lua]]) + vim.cmd([[TSUpdateSync org]]) + + -- Close packer after install + if vim.bo.filetype == 'packer' then + vim.api.nvim_win_close(0, true) + end + + require('orgmode').setup({ + org_agenda_files = { + './*' + } + } + ) + + -- Reload current file if it's org file to reload tree-sitter + if vim.bo.filetype == 'org' then + vim.cmd([[edit!]]) + end + end + if vim.fn.isdirectory(install_path) == 0 then + vim.fn.system({ 'git', 'clone', 'https://github.com/wbthomason/packer.nvim', install_path }) + load_plugins() + require('packer').sync() + vim.cmd([[autocmd User PackerCompileDone ++once lua load_config()]]) + else + load_plugins() + load_config() + end + + require('telescope').setup{ + defaults = { + preview = { + enable = true, + treesitter = false, + }, + vimgrep_arguments = { + "ag", + "--nocolor", + "--noheading", + "--numbers", + "--column", + "--smart-case", + "--silent", + "--follow", + "--vimgrep", + }, + file_ignore_patterns = { + "%.svg", + "%.spl", + "%.sug", + "%.bmp", + "%.gpg", + "%.pub", + "%.kbx", + "%.db", + "%.jpg", + "%.jpeg", + "%.gif", + "%.png", + "%.org_archive", + "%.flf", + ".cache", + ".git/", + ".thunderbird", + ".nas", + }, + mappings = { + i = { + -- Required so that folding works when opening a file in telescope + -- https://github.com/nvim-telescope/telescope.nvim/issues/559 + [""] = function() + vim.cmd [[:stopinsert]] + vim.cmd [[call feedkeys("\")]] + end, + [''] = 'move_selection_next', + [''] = 'move_selection_previous', + } + } + }, + pickers = { + find_files = { + find_command = { "rg", "--files", "--hidden", "--glob", "!**/.git/*" }, + hidden = true, + follow = true, + } + }, + extensions = 
{ + fzf = { + fuzzy = true, -- false will only do exact matching + override_generic_sorter = true, -- override the generic sorter + override_file_sorter = true, -- override the file sorter + case_mode = "smart_case", -- or "ignore_case" or "respect_case" + -- the default case_mode is "smart_case" + }, + heading = { + treesitter = true, + }, + } + } + + require('telescope').load_extension('orgmode') + + local key = vim.keymap + vim.g.mapleader = ' ' + + local builtin = require('telescope.builtin') + key.set('n', 'f', builtin.find_files, {}) + key.set('n', 'F', ':Telescope file_browser') + + vim.api.nvim_create_autocmd('FileType', { + pattern = 'org', + group = vim.api.nvim_create_augroup('orgmode_telescope_nvim', { clear = true }), + callback = function() + vim.keymap.set('n', 'r', require('telescope').extensions.orgmode.refile_heading) + vim.keymap.set('n', 'g', require('telescope').extensions.orgmode.search_headings) + end, + }) + + require('orgmode').setup_ts_grammar() + local org = require('orgmode').setup({ + org_agenda_files = { + "./*" + }, + org_todo_keywords = { 'TODO(t)', 'CHECK(c)', 'DOING(d)', 'RDEACTIVATED(r)', 'WAITING(w)', '|','DONE(e)', 'REJECTED(j)', 'DUPLICATE(u)' }, + org_hide_leading_stars = true, + org_deadline_warning_days = 0, + win_split_mode = "horizontal", + org_priority_highest = 'A', + org_priority_default = 'C', + org_priority_lowest = 'D', + mappings = { + global = { + org_agenda = 'ga', + org_capture = ';c', + }, + org = { + -- Enter new items + org_meta_return = '', + org_insert_heading_respect_content = ';', + org_insert_todo_heading = "", + org_insert_todo_heading_respect_content = ";t", + + -- Heading promoting and demoting + org_toggle_heading = 'h', + org_do_promote = 'b', [[:lua require"dap".toggle_breakpoint()]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'c', [[:lua require"dap".continue()]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'n', [[:lua require"dap".step_over()]], { noremap = true }) + 
vim.api.nvim_set_keymap('n', 'm', [[:lua require"dap".repl.open()]], { noremap = true })
+    vim.api.nvim_set_keymap('n', 'N', [[:lua require"dap".step_into()]], { noremap = true })
+    vim.api.nvim_set_keymap('n', '', [[:lua require"dap.ui.widgets".hover()]], { noremap = true })
+    vim.api.nvim_set_keymap('n', '', [[:lua require"osv".launch({port = 8086})]], { noremap = true })
+    ```
+
+* New: [Hide the state changes in the folds.](orgmode.md#hide-the-state-changes-in-the-folds)
+
+    The folding of the recurring task iterations is also somewhat broken. Take the next example:
+
+    ```orgmode
+    ** TODO Recurring task
+    DEADLINE: <2024-02-08 Thu .+14d -0d>
+    :PROPERTIES:
+    :LAST_REPEAT: [2024-01-25 Thu 11:53]
+    :END:
+    - State "DONE" from "TODO" [2024-01-25 Thu 11:53]
+    - State "DONE" from "TODO" [2024-01-10 Wed 23:24]
+    - State "DONE" from "TODO" [2024-01-03 Wed 19:39]
+    - State "DONE" from "TODO" [2023-12-11 Mon 21:30]
+    - State "DONE" from "TODO" [2023-11-24 Fri 13:10]
+
+    - [ ] Do X
+    ```
+
+    When folded, the state changes are not added to the `:PROPERTIES:` fold. Something like this is shown:
+
+    ```orgmode
+    ** TODO Recurring task
+    DEADLINE: <2024-02-08 Thu .+14d -0d>
+    :PROPERTIES:...
+    - State "DONE" from "TODO" [2024-01-25 Thu 11:53]
+    - State "DONE" from "TODO" [2024-01-10 Wed 23:24]
+    - State "DONE" from "TODO" [2024-01-03 Wed 19:39]
+    - State "DONE" from "TODO" [2023-12-11 Mon 21:30]
+    - State "DONE" from "TODO" [2023-11-24 Fri 13:10]
+
+    - [ ] Do X
+    ```
+
+    I don't know if this is a bug or a feature, but when you have many iterations it's difficult to see the task description. It would be awesome if they could be included in the properties fold or have their own fold.
+
+    I've found though that if you set [`org_log_into_drawer = "LOGBOOK"` in the config](https://github.com/nvim-orgmode/orgmode/issues/455) this is fixed. 
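+
+    A minimal sketch of that setting, assuming the `require('orgmode').setup` call used in the configs above:
+
+    ```lua
+    -- Log repeated-task state changes into a LOGBOOK drawer,
+    -- so they get folded away together with the drawer
+    require('orgmode').setup({
+      org_log_into_drawer = "LOGBOOK",
+    })
+    ```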
+
+* New: [Things that are still broken or not developed.](orgmode.md#things-that-are-still-broken-or-not-developed)
+
+    - [The agenda does not get automatically refreshed](https://github.com/nvim-orgmode/orgmode/issues/656)
+    - [Uncheck checkboxes on recurring tasks once they are completed](https://github.com/nvim-orgmode/orgmode/issues/655)
+    - [Foldings when moving items around](https://github.com/nvim-orgmode/orgmode/issues/524)
+    - [Refiling from the agenda](https://github.com/nvim-orgmode/orgmode/issues/657)
+    - [Interacting with the logbook](https://github.com/nvim-orgmode/orgmode/issues/149)
+
+* Correction: Rename Task to Action.
+
+    To remove the capitalist productivity load from the concept.
+
+* New: [How to deal with recurring tasks that are not yet ready to be acted upon.](orgmode.md#how-to-deal-with-recurring-tasks-that-are-not-yet-ready-to-be-acted-upon)
+
+    By default, when you mark a recurrent task as `DONE` it will transition the date (either appointment, `SCHEDULED` or `DEADLINE`) to the next date and change the state to `TODO`. I found it confusing because for me `TODO` actions are the ones that can be acted upon right now. That's why I'm using the next states instead:
+
+    - `INACTIVE`: Recurrent task whose date is not yet close, so you should not take care of it yet.
+    - `READY`: Recurrent task whose date [is overdue](#how-to-deal-with-overdue-SCHEDULED-and-DEADLINE-tasks); we acknowledge the fact and mark the date as inactive (so that it doesn't clobber the agenda).
+
+    The idea is that once an `INACTIVE` task reaches your agenda, either because the warning days of the `DEADLINE` make it show up or because its `SCHEDULED` date has arrived, you need to decide whether to change it to `TODO` (if it's to be acted upon immediately) or to `READY`, deactivating the date.
+
+    `INACTIVE` then should be the default state transition for the recurring tasks once you mark them as `DONE`. 
To do this, set in your config:
+
+    ```lua
+    org_todo_repeat_to_state = "INACTIVE",
+    ```
+
+    If a project gathers a list of recurrent subprojects or subactions, it can have the following states:
+
+    - `READY`: If there is at least one subelement in state `READY` and the rest are `INACTIVE`.
+    - `TODO`: If there is at least one subelement in state `TODO` and the rest are `READY` or `INACTIVE`.
+    - `INACTIVE`: The project is not planned to be acted upon soon.
+    - `WAITING`: The project is planned to be acted upon but all its subelements are in `INACTIVE` state.
+
+* New: [Debug `<c-i>` doesn't go up in the jump list.](orgmode.md#-doesn't-go-up-in-the-jump-list)
+
+    It's because [`<Tab>` is a synonym of `<c-i>`](https://github.com/neovim/neovim/issues/5916), and `org_cycle` is [mapped by default as `<TAB>`](https://github.com/nvim-orgmode/orgmode/blob/c0584ec5fbe472ad7e7556bc97746b09aa7b8221/lua/orgmode/config/defaults.lua#L146).
+    If you're used to using `zc`, you can disable `org_cycle` by setting the mapping `org_cycle = ""`.
+
+* New: [Python libraries.](orgmode.md#python-libraries)
+
+    **[org-rw](https://code.codigoparallevar.com/kenkeiras/org-rw)**
+
+    `org-rw` is a library designed to handle Org-mode files, offering the ability to modify data and save it back to the disk.
+
+    - **Pros**:
+        - Allows modification of data and saving it back to the disk
+        - Includes tests to ensure functionality
+
+    - **Cons**:
+        - Documentation is lacking, making it harder to understand and use
+        - The code structure is complex and difficult to read
+        - Uses `unittest` instead of `pytest`, which some developers may find less convenient
+        - Tests are not easy to read
+        - Last commit was made five months ago, indicating potential inactivity
+        - [Not very popular](https://github.com/kenkeiras/org-rw), with only one contributor, three stars, and no forks
+
+    **[orgparse](https://github.com/karlicoss/orgparse)**
+
+    `orgparse` is a more popular library for parsing Org-mode files, with better community support and more contributors. However, it has significant limitations in terms of editing and saving changes.
+
+    - **Pros**:
+        - More popular, with 13 contributors, 43 forks, and 366 stars
+        - Includes tests to ensure functionality
+        - Provides some documentation, available [here](https://orgparse.readthedocs.io/en/latest/)
+
+    - **Cons**:
+        - Documentation is not very comprehensive
+        - Cannot write back to Org-mode files, limiting its usefulness for editing content
+        - The author suggests using [inorganic](https://github.com/karlicoss/inorganic) to convert Org-mode entities to text, with examples available in doctests and the [orger](https://github.com/karlicoss/orger) library.
+            - `inorganic` is not popular, with one contributor, four forks, 24 stars, and no updates in five years
+            - The library is only 200 lines of code
+        - The `ast` is geared towards single-pass document reading. While it is possible to modify the document object tree, writing back changes is more complicated and not a common use case for the author.
+
+    **[Tree-sitter](https://tree-sitter.github.io/tree-sitter/)**
+
+    Tree-sitter is a powerful parser generator tool and incremental parsing library. 
It can build a concrete syntax tree for a source file and efficiently update the syntax tree as the source file is edited.
+
+    - **Pros**:
+        - General enough to parse any programming language
+        - Fast enough to parse on every keystroke in a text editor
+        - Robust enough to provide useful results even in the presence of syntax errors
+        - Dependency-free, with a runtime library written in pure C
+        - Supports multiple languages through community-maintained parsers
+        - Used by Neovim, indicating its reliability and effectiveness
+        - Provides good documentation, available [here](https://tree-sitter.github.io/tree-sitter/using-parsers)
+        - Its Python library, [py-tree-sitter](https://github.com/tree-sitter/py-tree-sitter), simplifies the installation process
+
+    - **Cons**:
+        - Requires installation of Tree-sitter and the Org-mode language parser separately
+        - The Python library does not handle the Org-mode language parser directly
+
+    To get a better grasp of Tree-sitter you can check their talks:
+
+    - [Strange Loop 2018](https://www.thestrangeloop.com/2018/tree-sitter---a-new-parsing-system-for-programming-tools.html)
+    - [FOSDEM 2018](https://www.youtube.com/watch?v=0CGzC_iss-8)
+    - [Github Universe 2017](https://www.youtube.com/watch?v=a1rC79DHpmY)
+
+    **[lazyblorg orgparser.py](https://github.com/novoid/lazyblorg/blob/master/lib/orgparser.py)**
+
+    `lazyblorg orgparser.py` is another tool for working with Org-mode files. However, I haven't looked into it.
+
+* Correction: [Tweak area concept.](time_management_abstraction_levels.md#area)
+
+    Areas model a group of projects that follow the same interest, role or accountability. These are not things to finish but rather criteria to analyze and define a specific aspect of your life, and to prioritize its projects to reach a higher outcome. We'll use areas to maintain balance and sustainability in our responsibilities as we operate in the world. Area titles don't contain verbs, as they don't model actions. 
An example of areas can be *health*, *travels* or *economy*.
+
+    To filter the projects by area I set an area tag that propagates downstream. To find the area documents easily I add a section in the `index.org` of the documentation repository. For example:
+
+* New: [Change the default org-todo-keywords.](org_rw.md#change-the-default-org-todo-keywords)
+
+    ```python
+    from org_rw import loads
+
+    orig = '''* NEW_TODO_STATE First entry
+
+    * NEW_DONE_STATE Second entry'''
+    doc = loads(orig, environment={
+        'org-todo-keywords': "NEW_TODO_STATE | NEW_DONE_STATE"
+    })
+    ```
+
+### [Orgzly](orgzly.md)
+
+* New: Migrate from Orgzly to Orgzly Revived.
+
+### [Gancio](roadmap_adjustment.md)
+
+* Correction: Change the concept of `Task` for `Action`.
+
+    To remove the capitalist productive mindset from the concept.
+
+* Correction: [Action cleaning.](roadmap_adjustment.md#action-cleaning)
+
+    Marking steps as done can help you get an idea of the evolution of the action. It can also be useful if you want to do some kind of reporting. On the other hand, having a long list of done steps (especially if you have many levels of step indentation) may make finding the next actionable step difficult. It's a good idea then to often clean up all done items.
+
+    - For non-recurring actions, use the `LOGBOOK` to store the done steps. For example:
+
+        ```orgmode
+        ** DOING Do X
+        :LOGBOOK:
+        - [x] Done step 1
+        - [-] Doing step 2
+          - [x] Done substep 1
+        :END:
+        - [-] Doing step 2
+          - [ ] substep 2
+        ```
+
+        This way the `LOGBOOK` will be automatically folded, so you won't see the progress but it's at hand in case you need it.
+
+    - For recurring actions:
+        - Mark the steps as done
+        - Archive the todo element.
+        - Undo the archive.
+        - Clean up the done items.
+
+        This way you have a snapshot of the state of the action in your archive.
+
+* New: [Project cleaning.](roadmap_adjustment.md#project-cleaning)
+
+    Similar to [action cleaning](#action-cleaning), we want to keep the state clean. 
If there are not that many actions under the project, we can leave the done elements as `DONE`; once they start to get cluttered we can create a `Closed` section.
+
+    For recurring projects:
+
+    - Mark the actions as done
+    - Archive the project element.
+    - Undo the archive.
+    - Clean up the done items.
+
+* New: [Trimester review.](roadmap_adjustment.md#trimester-review)
+
+    The objectives of the trimester review are:
+
+    - Identify the areas to focus on for the trimester
+    - Identify the tactics you want to use on those areas.
+    - Review the previous trimester's tactics
+
+    The objectives are not:
+
+    - To review what you've done or why you didn't get there.
+
+    **When to do the trimester reviews**
+
+    As with the [personal integrity review](#personal-integrity-review), it's interesting to do the analysis at representative moments. It gives it an emotional weight. You can for example use the solstices, or my personal version of them:
+
+    - Spring analysis (1st of March): For me the spring is the real start of the year, it's when life explodes after the stillness of the winter. The sun starts to set late enough that you have light in the afternoons, the climate gets warmer thus inviting you to be more outside, nature is blooming with new leaves and flowers. It is then a moment to build new projects and set the current year on track.
+    - Summer analysis (1st of June): I hate heat, so summer is a moment of retreat. Everyone temporarily stops their lives, we go on holidays and all social projects slow their pace. Even the news has fewer interesting things to report. It's so hot outside that some of us seek the cold refuge of home or remote holiday places. Days are long and people love to hang out till late, so usually you wake up later, thus having less time to actually do stuff. Even in the moments when you are alone, the heat drains your energy to be productive. It is then a moment to relax and gather forces for the next trimester. 
It's also perfect to develop *easy* and *chill* personal projects that have been forgotten in a drawer. Lower your expectations and just flow with what your body asks you.
+    - Autumn analysis (1st of September): September is another key moment for many people. We have it hardcoded in our lives since childhood, as it was the start of school. People feel energized after the summer holidays and are eager to get back to their lives and stopped projects. You're already 6 months into the year, so it's a good moment to review your year plan and decide how you want to invest your energy reserves.
+    - Winter analysis (1st of December): December is the cue that the year is coming to an end. The days grow shorter and colder; they basically invite you to enjoy a cup of tea under a blanket. It is then a good time to get into your cave and do an introspective analysis of the whole year and prepare the ground for the coming year. Some of the goals of this season are:
+        - Think about everything you need to guarantee a good, solid and powerful spring start.
+        - Do the year review to adjust your principles.
+
+    The year is then divided into two pairs of an expansion trimester and a retreat one. We can use this information to adjust our life plan accordingly. In the expansion trimesters we could invest more energy in planning, and in the retreat ones we can do more thorough reviews.
+
+    **Listen to your desires**
+
+    The trimester review requires an analysis that doesn't fit in a day session. It requires slow thinking over some time. So I create a task 10 days before the actual review to start thinking about the next trimester. Whether it's ideas, plans, desires, objectives, values, or principles.
+
+    It's useful for that document to be available wherever you go, so that in any spare moment you can pop it up and continue the train of thought.
+
+    Doing the reflection without seeing your life path prevents you from being tainted by it, thus representing the real you of right now.
+
+    On the day you actually do the review, follow the steps of the [Month review prepare](#month-prepare), adjusting them to the trimester case.
+
+    **Answer some meaningful guided questions**
+
+    To be done; until then you can read chapters 13, 14 and the epilogue of the book *Four Thousand Weeks* by Oliver Burkeman.
+
+    **Refactor your gathered thoughts**
+
+    If you've followed the prepare steps, you've already been making up your mind on what you want the next trimester to look like. Now it's the time to refine those thoughts.
+
+    In your roadmap document add a new section for the incoming trimester similar to:
+
+    ```orgmode
+    * Roadmap
+    ** 2024
+    *** Summer 2024
+    **** Essential intent
+    **** Trimester analysis
+    **** Trimester objectives
+    ***** TODO Objective 1
+    ****** TODO SubObjective 1
+    ```
+
+    Go through your gathered items *one by one* (don't peek!) and translate them into the next sections:
+
+    - `Trimester analysis`: A text with as many paragraphs as you need to order your thoughts.
+    - `Trimester objectives`: These can be concrete emotional projects you want to carry through.
+    - `Essential intent`: This is the main headline of your trimester; probably you won't be able to define it until the last parts of the review process. It should be concrete and emotional too, as it's going to be the idea that gives you strength in your weak moments and your guide to decide which projects to do and which not to.
+
+    Don't be too concerned about the format of the content of the objectives; this is the first draft, and we'll refine it through the planning.
+
+* New: [Wordpress plugin.](gancio.md#wordpress-plugin)
+
+    This plugin allows you to embed a list of events or a single event from your Gancio website using a shortcode.
+    It also allows you to connect a Gancio instance to your WordPress website to automatically push events published on WordPress.
+    For this to work an event manager plugin is required; Event Organiser and The Events Calendar are supported. Adding support for another plugin is an easy task, and there's a guide available in the repo that shows you how to do it.
+
+    The source code of the plugin is in the [wp-plugin](https://framagit.org/les/gancio/-/tree/master/wp-plugin?ref_type=heads) directory of the official repo.
+
+### [Habit management](habit_management.md)
+
+* New: Introduce habit management.
+
+    A [habit](https://en.wikipedia.org/wiki/Habit) is a routine of behavior that is repeated regularly and tends to occur subconsciously.
+
+    A [2002 daily experience study](https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.83.6.1281) found that approximately 43% of daily behaviors are performed out of habit. New behaviours can become automatic through the process of habit formation. Old habits are hard to break and new habits are hard to form because the behavioural patterns that humans repeat become imprinted in neural pathways, but it is possible to form new habits through repetition.
+
+    When behaviors are repeated in a consistent context, there is an incremental increase in the link between the context and the action. This increases the automaticity of the behavior in that context. Features of an automatic behavior are all or some of: efficiency, lack of awareness, unintentionality, and uncontrollability.
+
+    Mastering habit formation can be a powerful tool to change yourself. Usually small changes yield massive outcomes in the long run. The downside is that it's not for impatient people, as it often appears to make no difference until you cross a critical threshold that unlocks a new level of performance.
+
+* New: [Why are habits interesting.](habit_management.md#why-are-habits-interesting)
+
+    Whenever you face a problem repeatedly, your brain begins to automate the process of solving it. Habits are a series of automatic resolutions that solve the problems and stresses you face regularly.
+
+    As habits are created, the level of activity in the brain decreases. You learn to lock in on the cues that predict success and tune out everything else. When a similar situation arises in the future, you know exactly what to look for. There is no longer a need to analyze every angle of a situation. Your brain skips the process of trial and error and creates a mental rule: if this, then that.
+
+    Habit formation is incredibly useful because the conscious mind is the bottleneck of the brain. It can only pay attention to one problem at a time. Habits reduce the cognitive load and free up mental capacity, as they can be carried out by your nonconscious mind while you allocate your attention to other tasks.
+
+* New: [Identity focused changes.](habit_management.md#identity-focused-changes)
+
+    Changing our habits is challenging because we try to change the wrong thing in the wrong way.
+
+    There are three levels at which change can occur:
+
+    - Outcomes: Changing your results. Goals fall under this category: publishing a book, running daily.
+    - Process: Changing your habits and systems: decluttering your desk for a better workflow, developing a meditation practice.
+    - Identity: Changing your beliefs, assumptions and biases: your world view, your self-image, your judgments.
+
+    Many people begin the process of changing their habits by focusing on what they want to achieve. This leads to outcome-based habits. The alternative is to build identity-based habits. With this approach, we start by focusing on who we wish to become.
+
+    The first path of change is doomed because maintaining behaviours that are incongruent with the self is expensive and will not last.
Even if they make rational sense, it's hard to change your habits if you never change the underlying beliefs that led to your past behaviour. On the other hand, it's easy to find motivation once a habit has changed your identity, as you may be proud of it and will be willing to maintain all the habits and systems associated with it. For example: the goal is not to read a book, but to become a reader.
+
+    Focusing on outcomes may also bring the next problems:
+
+    - Focusing on the results may lead you to temporal solutions. If you focus on the source of the issue at hand, you may solve it with less effort and reach a more stable solution.
+    - Goals create an "either-or" conflict: either you achieve your goal and are successful, or you fail and you are disappointed. Thus you only get a positive reward if you fulfill a goal. If you instead focus on the process rather than the result, you will be satisfied anytime your system is running.
+    - When your hard work is focused on a goal, you may feel depleted once you meet it, and that could make you lose the condition that made you meet the goal in the first place.
+
+    Research has shown that once a person believes in a particular aspect of their identity, they are more likely to act in alignment with that belief. This of course is a double-edged sword. Identity change can be a powerful force for self-improvement. When working against you, identity change can be a curse.
+
+* New: [Changing your identity.](habit_management.md#changing-your-identity)
+
+    Whatever your identity is right now, you only believe it because you have proof of it. The more evidence you have for a belief, the more strongly you will believe it.
+
+    Your habits and systems are how you embody your identity. When you make your bed each day, you embody the identity of an organized person. The more you repeat a behaviour, the more you reinforce the identity associated with that behaviour, to the point that your self-image begins to change.
The effect of one-off experiences tends to fade away, while the effect of habits gets reinforced with time, which means your habits contribute most of the evidence that shapes your identity.
+
+    Every action you take is a vote for the type of person you wish to become. This is one reason why meaningful change does not require radical change. Small habits can make a meaningful difference by providing evidence of a new identity.
+
+    Once you start the ball rolling things become easier, as building habits is a feedback loop. Your habits shape your identity, and your identity shapes your habits.
+
+    The most practical way to change your identity is to:
+
+    - [Decide the type of person you want to be](habit_management.md#decide-the-type-of-person-you-want-to-be)
+    - Prove it to yourself with small wins
+
+    Another advantage of focusing on what type of person you want to be is that maybe the outcome you wanted to focus on is not the wisest smallest step to achieve your identity change. Thinking about the identity you want to embrace can make you think outside the box.
+
+* New: [Decide the type of person you want to be.](habit_management.md#decide-the-type-of-person-you-want-to-be)
+
+    One way to decide the person you want to be is to answer big questions like: what do you want to stand for? What are your principles and values? Who do you wish to become?
+
+    As we're more result oriented, another way is to work backwards from the results to the person you want to be. Ask yourself: who is the type of person that could get the outcome I want?
+
+* New: [How to change a habit.](habit_management.md#how-to-change-a-habit)
+
+    The process of building a habit from a behaviour can be divided into four stages:
+
+    - **Reward** is the end goal.
+    - **Cue** is the trigger in your brain that initiates a behaviour. It contains the information that predicts a reward.
+    - **Cravings** are the motivational force fueled by the desire for the reward. Without motivation we have no reason to act.
+    - **Response** is the thought or action you perform to obtain the reward. The response depends on the amount of motivation you have, how much friction is associated with the behaviour and your ability to actually do it.
+
+    If a behaviour is insufficient in any of the four stages, it will not become a habit. Eliminate the cue and your habit will never start. Reduce the craving and you won't have enough motivation to act. Make the behaviour difficult and you won't be able to do it. And if the reward fails to satisfy your desire, then you'll have no reason to do it again in the future.
+
+    We chase rewards because they:
+
+    - Deliver contentment.
+    - Satisfy your craving.
+    - Teach us which actions are worth remembering in the future.
+
+    If a reward is met, then it becomes associated with the cue, thus closing the habit feedback loop.
+
+    If we keep these stages in mind then:
+
+    - To build good habits we need to:
+
+        - Cue: Make it obvious
+        - Craving: Make it attractive
+        - Response: Make it easy
+        - Reward: Make it satisfying
+
+    - To break bad habits we need to:
+
+        - Cue: Make it invisible
+        - Craving: Make it unattractive
+        - Response: Make it difficult
+        - Reward: Make it unsatisfying
+
+* New: [Select which habits you want to work with.](habit_management.md#select-which-habits-you-want-to-work-with)
+
+    Our responses to the cues are so deeply encoded that it may feel like the urge to act comes from nowhere. For this reason, we must begin the process of behavior change with awareness. Before we can effectively build new habits, we need to get a handle on our current ones. The author suggests making a list of your daily habits and rating each one as positive, negative or neutral under the judgement of whether it brings you closer to the person you want to be.
+
+    I find this approach expensive time-wise if you already have a huge list of habits to work with. As it's my case, I'll skip this part.
You can read it in more detail in the chapter "4: The Man Who Didn't Look Right".
+
+* New: [Working with the habit cues.](habit_management.md#working-with-the-habit-cues)
+
+    The first place to start the habit design is to understand and tweak the triggers that produce the habits. We'll do it by:
+
+    - [Clearly formulating the habits to change](habit_management.md#clearly-formulate-the-habit-you-want-to-change)
+    - [Stacking habits](habit_management.md#habit-stacking)
+    - [Using the environment to tweak your cues](habit_management.md#use-the-environment-to-tweak-your-cues)
+
+* New: [Clearly formulate the habit you want to change.](habit_management.md#clearly-formulate-the-habit-you-want-to-change)
+
+    The cues that can trigger a habit come in a wide range of forms, but the two most common are time and location. Being specific about what you want and how you will achieve it helps you say no to things that derail progress, distract your attention and pull you off course. And with enough repetition, you will get the urge to do the right thing at the right time, even if you can't say why. That's why it's interesting to formulate your habits as "I will [behaviour] at [time] in [location]".
+
+    You want the cue to be highly specific and immediately actionable. If there is room for doubt, the implementation will suffer. Continuously refine the habit definitions as you catch the exceptions that drift you off course.
+
+    If you aren't sure of when to start your habit, try the first day of the week, month or year. People are more likely to take action at those times because hope is usually higher, as you get the feeling of a fresh start.
+
+* New: [Habit stacking.](habit_management.md#habit-stacking)
+
+    Many behaviours are linked together, where the action of the first is the cue that triggers the next one. You can use this connection to build new habits based on your established ones. This may be called habit stacking.
The formulation in this case is "After [current habit], I will [new habit]".
+
+    The key is to tie your desired behaviour to something you already do each day. Once you have mastered this basic structure, you can begin to create larger stacks by chaining small habits together. The catch is that the new habit should have the same frequency as the established one.
+
+    One way to find the right trigger for your habit stack is by brainstorming over:
+
+    - The list of your current habits.
+    - A new list of things that always happen to you with that frequency.
+
+    With these two lists, you can begin searching for the best triggers for the stack.
+
+* New: [Use the environment to tweak your cues.](habit_management.md#use-the-environment-to-tweak-your-cues)
+
+    The cues that trigger a habit can start out very specific, but over time your habits become associated not with a single trigger but with the entire context surrounding the behaviour. This effect compounds, and your habits change depending on the room you are in and the cues in front of you. The context or the environment is then the invisible hand that shapes behaviours. Habits are not defined by the objects in the environment but by our relationship to them.
+
+    A new environment is a good foundation for making new habits, as you are free from the subtle triggers that nudge you toward your current habits. When you can't manage to get an entirely new environment, you can redefine or rearrange your current one.
+
+    When building good habits, you can rearrange the environment to create obvious visual cues that draw your attention towards the desired habit. By sprinkling triggers throughout your surroundings, you increase the odds that you'll think about your habit throughout the day.
+
+    Once a habit has been encoded, the urge to act follows whenever the environmental cues reappear. This is why bad habits reinforce themselves.
As you carry through the behaviour you spiral into a situation where the craving keeps growing and pushes you to keep going with the same response. For example, watching TV makes you feel sluggish, so you watch more television because you don't have the energy to do anything else.
+
+    Even if you manage to break a habit, you are unlikely to forget its cues, even if you don't act on them for a while. That means that simply resisting temptation is an ineffective strategy. In the short run it may work. In the long run, as self-control is an exhausting task that consumes willpower, we become a product of the environment we live in. Trying to change a habit with self-control is doomed to fail, as you may be able to resist temptation once or twice, but it's unlikely you can muster the willpower to override your desires every time. It's also very hard and frustrating to try to achieve change when you're under the mood influences of a bad habit.
+
+    A more reliable approach is to cut bad habits off at the source. Tweak the environment to make the cue virtually impossible to happen. That way you won't even have the chance to fall for the craving.
+
+* New: [Temptation bundling.](habit_management.md#temptation-bundling)
+
+    Dopamine is a neurotransmitter that can be used as the scientific measurement of craving. For years we assumed that it was all about pleasure, but now we know it plays a central role in many neurological processes, including motivation, learning and memory, punishment and aversion, and voluntary movement.
+
+    Habits are a dopamine-driven feedback loop. Dopamine is released not only when you receive a reward but also when you anticipate it. This anticipation, and not the fulfillment of it, is what gets us to take action.
+
+    If we make a habit more attractive it will release more dopamine, which will give us more motivation to carry it through.
+
+    Temptation bundling works by pairing an action you want to do with an action you need to do.
You're more likely to find a behaviour attractive if you get to do one of your favourite things at the same time. In the end you may even look forward to doing the habit you need, as it's related to the habit you want.
+
+* New: [Align your personal identity change with an existent shared identity.](habit_management.md#align-your-personal-identity-change-with-an-existent-shared-identity)
+
+    We pick up habits from the people around us. As a general rule, the closer we are to someone, the more likely we are to imitate some of their habits. One of the most effective things you can do to build better habits is to join a culture where your desired behaviour is the normal one. This transforms your personal identity transformation into the building of a shared one. Shared identities have great benefits over individual ones:
+
+    - They foster belonging, a powerful feeling that creates motivation.
+    - They are more resilient: when one falters, others will take their place, so together you'll guarantee the maintenance of the identity.
+    - They create friendship and community.
+    - They expose you to an environment where more habits tied to that identity thrive.
+
+    Likewise, if you're trying to run from a bad habit, cut your ties to communities that embrace that habit.
+
+* New: [Track your habit management.](habit_management.md#track-your-habit-management)
+
+    You can have a `habits.org` file where you prioritize, analyze and track them.
+
+    I'm using the next headings:
+
+    * *Habits being implemented*: It's subdivided in two:
+        * Habits that need attention
+        * Habits that don't need attention
+    * *Unclassified habits*: Useful when refiling habits from your inbox. This list will be analyzed when you do habit analysis.
+    * *Backlog of habits*: Unrefined and unordered list of habits.
+    * *Implemented habits*.
+    * *Rejected habits*.
+
+    Each habit is a `TODO` item with the usual states: `TODO`, `DOING`, `DONE`, `REJECTED`.
In its body I keep a log of the evolution and the analysis of the habit.
+
+* New: [Habit management workflow.](habit_management.md#habit-management-workflow)
+
+    Each month I'm trying to go through the list of habits to:
+
+    - Update the state of the habits: some will be done or rejected, and for others I'll register ideas about them.
+    - Decide which ones need attention.
+    - Do habit analysis on the ones that need attention.
+
+    For each of the habits that need analysis, apply the learnings of the next sections:
+
+    - [Working with the habit cues](habit_management.md#working-with-the-habit-cues)
+    - [Working with the habit cravings](habit_management.md#working-with-the-habit-cravings)
+    - [Working with the habit responses](habit_management.md#working-with-the-habit-responses)
+    - [Working with the habit rewards](habit_management.md#working-with-the-habit-rewards)
+
+### [Calendar management](calendar_management.md)
+
+* New: [Add calendar event notification system tool.](calendar_management.md#calendar-event-notification-system)
+
+    Set up a system that notifies you when the next calendar event is about to start, to avoid spending mental load on it and to reduce the chances of missing the event.
+
+    I've created a small tool that:
+
+    - Tells me the number of [pomodoros](task_tools.md#pomodoro) that I have until the next event.
+    - Once a pomodoro finishes, it makes me focus on the amount left so that I can prepare for the event.
+    - Catches my attention when the event is starting.
+
+## Life chores management
+
+### [Grocy management](grocy_management.md)
+
+* New: [Doing the inventory review.](grocy_management.md#doing-the-inventory-review)
+
+    I haven't found a way to make the grocy inventory match reality, because for me it's hard to register when I consume a product, even more so if other people also use them. Therefore I use grocy only to know what to buy without thinking about it. For that use case the inventory needs to meet reality only before doing the groceries.
I usually do a big shopping of non-perishable goods at the supermarket once every two or three months, and a weekly shopping of the rest.
+
+    Tracking the goods that are bought each week makes no sense, as those are things that are clearly seen and are very variable depending on the season. Once I've automated the ingestion and consumption of products it will, but so far it would mean investing more time than the benefit it gives.
+
+    This doesn't apply to the big shopping, as that one is done infrequently, so it needs better planning.
+
+    To do the inventory review I use a tablet and the [android app](https://github.com/patzly/grocy-android).
+
+    - [ ] Open the stock overview and iterate through the locations to:
+        - [ ] Make sure that the number of products matches reality.
+        - [ ] Iterate over the list of products checking the quantity.
+        - [ ] Look at the location to see if there are missing products in the inventory.
+        - [ ] Adjust the product properties (default location, minimum amount).
+    - [ ] Check the resulting shopping list and adjust the minimum values.
+    - [ ] Check the list of missing products to adjust the minimum values. I have a notepad on the fridge where I write the things I miss.
+
+* New: Introduce route management.
+
+    To analyze which hiking routes are available in a zone I'm following the next process:
+
+    - [ ] Check my `trips` orgmode directory to see if the zone has already been indexed.
+    - [ ] Do a first search of routes:
+        - [ ] Identify which books or magazines describe the zone.
+        - [ ] For each of the described routes in each of these books:
+            - [ ] Create the `Routes` section with tag `:route:` if it doesn't exist.
+            - [ ] Fill up the route form in a `TODO` heading.
Something similar to:
+
+                ~~~
+                Reference: Book Page
+                Source: Where does it start
+                Distance: X km
+                Slope: X m
+                Type: [Lineal/Circular/Semi-lineal]
+                Difficulty:
+                Track: URL (only if you don't have to search for it)
+                ~~~
+            - [ ] Add tags of the people I'd like to do it with.
+            - [ ] Put a post-it on the book/magazine if it's likely I'm going to do it.
+            - [ ] Open a web maps tab with the source of the route to calculate the time from the different lodgings.
+        - [ ] If there are not enough, repeat the process above for each of your online route reference blogs.
+
+    - [ ] Choose the routes to do:
+        - [ ] Show the gathered routes to the people you want to go with.
+        - [ ] Select which ones you'll be more likely to do.
+
+    - [ ] For each of the chosen routes:
+        - [ ] Search for the track in Wikiloc if it's missing.
+        - [ ] Import the track into [OsmAnd+](osmand.md).
+
+* New: Add API and python library docs.
+
+    There is no active python library, although [pygrocy](https://github.com/SebRut/pygrocy) used to exist.
+
+    * [API Docs](https://demo.grocy.info/api)
+
+* New: Introduce himalaya.
+
+    [himalaya](https://github.com/pimalaya/himalaya) is a Rust CLI to manage emails.
+
+    Features:
+
+    - Multi-accounting
+    - Interactive configuration via **wizard** (requires `wizard` feature)
+    - Mailbox, envelope, message and flag management
+    - Message composition based on `$EDITOR`
+    - **IMAP** backend (requires `imap` feature)
+    - **Maildir** backend (requires `maildir` feature)
+    - **Notmuch** backend (requires `notmuch` feature)
+    - **SMTP** backend (requires `smtp` feature)
+    - **Sendmail** backend (requires `sendmail` feature)
+    - Global system **keyring** for managing secrets (requires `keyring` feature)
+    - **OAuth 2.0** authorization (requires `oauth2` feature)
+    - **JSON** output via `--output json`
+    - **PGP** encryption:
+        - via shell commands (requires `pgp-commands` feature)
+        - via [GPG](https://www.gnupg.org/) bindings (requires `pgp-gpg` feature)
+        - via native implementation (requires `pgp-native` feature)
+
+    Cons:
+
+    - Documentation is nonexistent; you have to dive into the `--help` to understand stuff.
+
+    **[Installation](https://github.com/pimalaya/himalaya)**
+
+    *The `v1.0.0` is currently being tested on the `master` branch, and is the preferred version to use. Previous versions (including GitHub beta releases and repository-published versions) are not recommended.*
+
+    Himalaya CLI `v1.0.0` can be installed with a pre-built binary. Find the latest [`pre-release`](https://github.com/pimalaya/himalaya/actions/workflows/pre-release.yml) GitHub workflow and look for the *Artifacts* section. You should find a pre-built binary matching your OS.
+
+    Himalaya CLI `v1.0.0` can also be installed with [cargo](https://doc.rust-lang.org/cargo/):
+
+    ```bash
+    $ cargo install --git https://github.com/pimalaya/himalaya.git --force himalaya
+    ```
+
+    **[Configuration](https://github.com/pimalaya/himalaya?tab=readme-ov-file#configuration)**
+
+    Just run `himalaya`; the wizard will help you configure your default account.
+
+    You can also manually edit your own configuration from scratch:
+
+    - Copy the content of the documented [`./config.sample.toml`](https://github.com/pimalaya/himalaya/blob/master/config.sample.toml)
+    - Paste it in a new file `~/.config/himalaya/config.toml`
+    - Edit, then comment or uncomment the options you want
+
+    **If using mbsync**
+
+    My generic configuration for an mbsync account is:
+
+    ```toml
+    [accounts.account_name]
+
+    email = "lyz@example.org"
+    display-name = "lyz"
+    envelope.list.table.unseen-char = "u"
+    envelope.list.table.replied-char = "r"
+    backend.type = "maildir"
+    backend.root-dir = "/home/lyz/.local/share/mail/lyz-example"
+    backend.maildirpp = false
+    message.send.backend.type = "smtp"
+    message.send.backend.host = "example.org"
+    message.send.backend.port = 587
+    message.send.backend.encryption = "start-tls"
+    message.send.backend.login = "lyz"
+    message.send.backend.auth.type = "password"
+    message.send.backend.auth.command = "pass show mail/lyz.example"
+    ```
+
+    Once you've set it up you need to [fix the INBOX directory](#cannot-find-maildir-matching-name-inbox).
+
+    Then you can check if it works by running `himalaya envelopes list -a lyz-example`.
+
+    **Vim plugin installation**
+
+    Using lazy:
+
+    ```lua
+    return {
+      {
+        "pimalaya/himalaya-vim",
+      },
+    }
+    ```
+
+    You can then run `:Himalaya account_name` and it will open himalaya in your editor.
+
+    **Configure the account bindings**
+
+    To avoid typing `:Himalaya account_name` each time you want to check the email you can set some bindings:
+
+    ```lua
+    return {
+      {
+        "pimalaya/himalaya-vim",
+        keys = {
+          { "ma", "<cmd>Himalaya account_name<cr>", desc = "Open account_name@example.org" },
+          { "ml", "<cmd>Himalaya lyz<cr>", desc = "Open lyz@example.org" },
+        },
+      },
+    }
+    ```
+
+    Setting the description is useful to see the configured accounts with which-key by typing `m` and waiting.
+
+    **Configure extra bindings**
+
+    The default plugin doesn't yet have all the bindings I'd like, so I've added the next ones:
+
+    - In the list of emails view:
+        - `dd` in normal mode or `d` in visual: Delete emails
+        - `q`: exit the program
+
+    - In the email view:
+        - `d`: Delete email
+        - `q`: Return to the list of emails view
+
+    If you want them too, set the next config:
+
+    ```lua
+    return {
+      {
+        "pimalaya/himalaya-vim",
+        config = function()
+          vim.api.nvim_create_augroup("HimalayaCustomBindings", { clear = true })
+          vim.api.nvim_create_autocmd("FileType", {
+            group = "HimalayaCustomBindings",
+            pattern = "himalaya-email-listing",
+            callback = function()
+              -- Bindings to delete emails
+              vim.api.nvim_buf_set_keymap(0, "n", "dd", "<plug>(himalaya-email-delete)", { noremap = true, silent = true })
+              vim.api.nvim_buf_set_keymap(0, "x", "d", "<plug>(himalaya-email-delete)", { noremap = true, silent = true })
+              -- Bind `q` to close the window
+              vim.api.nvim_buf_set_keymap(0, "n", "q", ":bd<cr>", { noremap = true, silent = true })
+            end,
+          })
+
+          vim.api.nvim_create_augroup("HimalayaEmailCustomBindings", { clear = true })
+          vim.api.nvim_create_autocmd("FileType", {
+            group = "HimalayaEmailCustomBindings",
+            pattern = "mail",
+            callback = function()
+              -- Bind `q` to close the window
+              vim.api.nvim_buf_set_keymap(0, "n", "q", ":q<cr>", { noremap = true, silent = true })
+              -- Bind `d` to delete the email and close the window
+              vim.api.nvim_buf_set_keymap(
+                0,
+                "n",
+                "d",
+                "<plug>(himalaya-email-delete):q<cr>",
+                { noremap = true, silent = true }
+              )
+            end,
+          })
+        end,
+      },
+    }
+    ```
+
+    **Configure email fetching from within vim**
+
+    [Fetching emails from within vim](https://github.com/pimalaya/himalaya-vim/issues/13) is not yet supported, so I'm manually refreshing by account:
+
+    ```lua
+    return {
+      {
+        "pimalaya/himalaya-vim",
+        keys = {
+          -- Email refreshing bindings
+          { "rj", ':lua FetchEmails("lyz")<cr>', desc = "Fetch lyz@example.org" },
+        },
+        config = function()
+          function 
FetchEmails(account)
+            vim.notify("Fetching emails for " .. account .. ", please wait...", vim.log.levels.INFO)
+            vim.cmd("redraw")
+            vim.fn.jobstart("mbsync " .. account, {
+              on_exit = function(_, exit_code, _)
+                if exit_code == 0 then
+                  vim.notify("Emails for " .. account .. " fetched successfully!", vim.log.levels.INFO)
+                else
+                  vim.notify("Failed to fetch emails for " .. account .. ". Check the logs.", vim.log.levels.ERROR)
+                end
+              end,
+            })
+          end
+        end,
+      },
+    }
+    ```
+
+    You still need to open `:Himalaya account_name` again, as the plugin does not reload if there are new emails.
+
+    **Show notifications when emails arrive**
+
+    You can set up [mirador](mirador.md) to get those notifications.
+
+    **Not there yet**
+
+    - [With the vim plugin you can't switch accounts](https://github.com/pimalaya/himalaya-vim/issues/8)
+    - [Let the user delete emails without confirmation](https://github.com/pimalaya/himalaya-vim/issues/12)
+    - [Fetching emails from within vim](https://github.com/pimalaya/himalaya-vim/issues/13)
+
+    **Troubleshooting**
+
+    **[Cannot find maildir matching name INBOX](https://github.com/pimalaya/himalaya/issues/490)**
+
+    `mbsync` uses `Inbox` instead of the default `INBOX`, so himalaya doesn't find it. In theory you can use `folder.alias.inbox = "Inbox"`, but it didn't work for me, so I finally ended up creating a symbolic link from `INBOX` to `Inbox`.
+
+    **Cannot find maildir matching name Trash**
+
+    That's because the `Trash` directory does not follow the Maildir structure. I had to create the `cur`, `tmp` and `new` directories.
+
+    **References**
+
+    - [Source](https://github.com/pimalaya/himalaya)
+    - [Vim plugin source](https://github.com/pimalaya/himalaya-vim)
+
+* New: Introduce mailbox.
+
+    [`mailbox`](https://docs.python.org/3/library/mailbox.html) is a python library to work with MailDir and mbox local mailboxes.
+
+    It's part of the core python libraries, so you don't need to install anything.
+
+    **Usage**
+
+    The docs are not very pleasant to read, so I got most of the usage knowledge from these sources:
+
+    - [pymotw docs](https://pymotw.com/2/mailbox/)
+    - [Cleanup maildir directories](https://cr-net.be/posts/maildir_cleanup_with_python/)
+    - [Parsing maildir directories](https://gist.github.com/tyndyll/6f6145f8b1e82d8b0ad8)
+
+    One thing to keep in mind is that an account can have many mailboxes (INBOX, Sent, ...); there is no "root mailbox" that contains all of the others.
+
+    **Initialise a mailbox**
+
+    ```python
+    mbox = mailbox.Maildir('path/to/your/mailbox')
+    ```
+
+    Where `path/to/your/mailbox` is the directory that contains the `cur`, `new`, and `tmp` directories.
+
+    **Working with mailboxes**
+
+    It's not very clear how to work with them. A `Maildir` mailbox exposes its emails through iterators (`[m for m in mbox]`) and acts kind of like a dictionary: you can get the keys of the emails with `[k for k in mbox.iterkeys()]` and then use `mbox[key]` to get an email. You cannot modify those emails (flags, subdir, ...) directly in the `mbox` object (for example `mbox[key].set_flags('P')` doesn't work). You need to `mail = mbox.pop(key)`, do the changes in the `mail` object and then `mbox.add(mail)` it again, with the downside that after you add it again the `key` has changed! The new key is the return value of the `add` method.
+
+    If the program gets interrupted between the `pop` and the `add`, you'll lose the email. The best way to work with it would then be:
+
+    - `mail = mbox.get(key)` the email
+    - Do all the processing you need to do with the email
+    - `mbox.pop(key)` and `key = mbox.add(mail)`
+
+    In theory `mbox` has an `update` method that does this, but I don't understand it and it doesn't work as expected :S.
+
+    **Moving emails around**
+
+    You can't just move the files between directories like you'd do with python, as each directory contains its own identifiers.
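+    The safe `get`/`pop`/`add` workflow described above can be sketched as follows. It's a minimal, self-contained example against a throwaway Maildir (the message and the `S` flag are made up for illustration):
+
+    ```python
+    import mailbox
+    import os
+    import tempfile
+    from email.message import EmailMessage
+
+    # Build a throwaway Maildir to play with
+    maildir_path = os.path.join(tempfile.mkdtemp(), "inbox")
+    mbox = mailbox.Maildir(maildir_path, create=True)
+
+    # Add a dummy message so we have something to edit
+    message = EmailMessage()
+    message["Subject"] = "hello"
+    message.set_content("hi there")
+    key = mbox.add(message)
+
+    # Safe edit workflow: work on a copy first, pop + re-add at the very end
+    mail = mbox.get(key)  # the file on disk is untouched so far
+    mail.set_flags("S")   # flag changes only affect the in-memory copy
+
+    mbox.pop(key)         # now remove the old file...
+    key = mbox.add(mail)  # ...and re-add it: note that the key changes!
+
+    print(mbox[key].get_flags())  # → S
+    ```
+
+    If the process dies before the final `pop`/`add` pair, the original file is still in the mailbox, which is the whole point of this ordering.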
    **Moving a message between the maildir directories**

    The `Message` object has a `set_subdir` method to move it between `new` and `cur`.

    **[Creating folders](https://pymotw.com/2/mailbox/#maildir-folders)**

    Even though you can create folders with `mailbox`, it creates them in a way that mbsync doesn't understand. It's easier to manually create the `cur`, `tmp`, and `new` directories. I'm using the next function:

    ```python
    import logging
    from pathlib import Path

    log = logging.getLogger(__name__)

    def initialize_mailbox(mailbox: str, mailbox_dir: Path) -> None:
        """Create the Maildir structure that mbsync expects."""
        if not (mailbox_dir / "cur").exists():
            for dir in ["cur", "tmp", "new"]:
                (mailbox_dir / dir).mkdir(parents=True)
            log.info(f"Initialized mailbox: {mailbox}")
        else:
            log.debug(f"{mailbox} already exists")
    ```

    **References**

    - [Reference Docs](https://docs.python.org/3/library/mailbox.html)
    - [Non official useful docs](https://pymotw.com/2/mailbox/)

* New: Introduce maildir.

    The [Maildir](https://en.wikipedia.org/wiki/Maildir) e-mail format is a common way of storing email messages on a file system, rather than in a database. Each message is assigned a file with a unique name, and each mail folder is a file system directory containing these files.

    A Maildir directory (often named `Maildir`) usually has three subdirectories named `tmp`, `new`, and `cur`.

    - The `tmp` subdirectory temporarily stores e-mail messages that are in the process of being delivered. This subdirectory may also store other kinds of temporary files.
    - The `new` subdirectory stores messages that have been delivered, but have not yet been seen by any mail application.
    - The `cur` subdirectory stores messages that have already been seen by mail applications.
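    The reason for this three-directory layout is atomic delivery: an agent writes the whole message into `tmp` and then renames it into `new`, so a reader never sees a half-written file. A rough sketch of that dance (the file naming is simplified compared to the real spec):

    ```python
    import os
    import socket
    import time
    from pathlib import Path

    def deliver(maildir: Path, content: bytes) -> Path:
        """Deliver a message the Maildir way: write to tmp/, then rename into new/."""
        name = f"{int(time.time())}.{os.getpid()}.{socket.gethostname()}"
        tmp_file = maildir / "tmp" / name
        tmp_file.write_bytes(content)
        new_file = maildir / "new" / name
        tmp_file.rename(new_file)  # atomic as long as both live on the same filesystem
        return new_file
    ```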
    **References**

    - [Wikipedia](https://en.wikipedia.org/wiki/Maildir)

* New: [My emails are not being deleted on the source IMAP server.](mbsync.md#my-emails-are-not-being-deleted-on-the-source-imap-server)

    That's the default behavior of `mbsync`. If you want it to actually delete the emails on the source you need to add:

    ```
    Expunge Both
    ```

    Under your channel (close to `Sync All`, `Create Both`).

* New: [Mbsync error: UID is beyond highest assigned UID.](mbsync.md#mbsync-error:-uid-is-beyond-highest-assigned-uid)

    If during the sync you receive the following error:

    ```
    mbsync error: UID is 3 beyond highest assigned UID 1
    ```

    Go to the place where `mbsync` is storing the emails and find the file that is giving the error; you need to find the files that contain `U=3`. Imagine that it's something like `1568901502.26338_1.hostname,U=3:2,S`. You can strip off everything from the `,U=` onwards from that filename, resync, and it should be fine, e.g.

    ```bash
    mv '1568901502.26338_1.hostname,U=3:2,S' '1568901502.26338_1.hostname'
    ```

* New: Introduce mirador.

    DEPRECATED: as of 2024-11-15 the tool has many errors ([1](https://github.com/pimalaya/mirador/issues/4), [2](https://github.com/pimalaya/mirador/issues/3)), few stars (4) and few commits (8). Use [watchdog](watchdog_python.md) instead and build your own solution.

    [mirador](https://github.com/pimalaya/mirador) is a CLI to watch mailbox changes made by the maintainer of [himalaya](himalaya.md).

    Features:

    - Watches and executes actions on mailbox changes
    - Interactive configuration via **wizard** (requires `wizard` feature)
    - Supported events: **on message added**.
    - Supported actions: **send system notification**, **execute shell command**.
    - Supports **IMAP** mailboxes (requires `imap` feature)
    - Supports **Maildir** folders (requires `maildir` feature)
    - Supports global system **keyring** to manage secrets (requires `keyring` feature)
    - Supports **OAuth 2.0** (requires `oauth2` feature)

    *Mirador CLI is written in [Rust](https://www.rust-lang.org/), and relies on [cargo features](https://doc.rust-lang.org/cargo/reference/features.html) to enable or disable functionalities. Default features can be found in the `features` section of the [`Cargo.toml`](https://github.com/pimalaya/mirador/blob/master/Cargo.toml#L18).*

    **[Installation](https://github.com/pimalaya/mirador)**

    The `v1.0.0` is currently being tested on the `master` branch, and is the preferred version to use. Previous versions (including GitHub beta releases and repository-published versions) are not recommended.

    **Cargo (git)**

    Mirador CLI `v1.0.0` can also be installed with [cargo](https://doc.rust-lang.org/cargo/):

    ```bash
    $ cargo install --frozen --force --git https://github.com/pimalaya/mirador.git
    ```

    **Pre-built binary**

    Mirador CLI `v1.0.0` can be installed with a pre-built binary. Find the latest [`pre-release`](https://github.com/pimalaya/mirador/actions/workflows/pre-release.yml) GitHub workflow and look for the *Artifacts* section. You should find a pre-built binary matching your OS.

    **Configuration**

    Just run `mirador`, the wizard will help you to configure your default account.
    You can also manually edit your own configuration, from scratch:

    - Copy the content of the documented [`./config.sample.toml`](https://github.com/pimalaya/mirador/blob/master/config.sample.toml)
    - Paste it in a new file `~/.config/mirador/config.toml`
    - Edit, then comment or uncomment the options you want

    - [Source](https://github.com/pimalaya/mirador)

* New: [Configure navigation bindings.](himalaya.md#configure-navigation-bindings)

    The default bindings conflict with my git bindings, and to make them similar to the orgmode agenda, I'm changing the next and previous page bindings:

    ```lua
    return {
      {
        "pimalaya/himalaya-vim",
        keys = {
          { "b", "<plug>(himalaya-folder-select-previous-page)", desc = "Go to the previous email page" },
          { "f", "<plug>(himalaya-folder-select-next-page)", desc = "Go to the next email page" },
        },
      },
    }
    ```

* Correction: [Configure the account bindings.](himalaya.md#configure-the-account-bindings)
* Correction: Tweak the bindings.

    Move forward and backwards in the history of emails:

    ```lua
    vim.api.nvim_create_autocmd("FileType", {
      group = "HimalayaCustomBindings",
      pattern = "himalaya-email-listing",
      callback = function()
        vim.api.nvim_buf_set_keymap(0, "n", "b", "<plug>(himalaya-folder-select-previous-page)", { noremap = true, silent = true })
        vim.api.nvim_buf_set_keymap(0, "n", "f", "<plug>(himalaya-folder-select-next-page)", { noremap = true, silent = true })
      end,
    })
    ```

    Better bindings for the email list view:

    ```lua
    -- Refresh emails
    vim.api.nvim_buf_set_keymap(0, "n", "r", ":lua FetchEmails()<CR>", { noremap = true, silent = true })
    -- Email list view bindings
    vim.api.nvim_buf_set_keymap(0, "n", "b", "<plug>(himalaya-folder-select-previous-page)", { noremap = true, silent = true })
    vim.api.nvim_buf_set_keymap(0, "n", "f", "<plug>(himalaya-folder-select-next-page)", { noremap = true, silent = true })
    vim.api.nvim_buf_set_keymap(0, "n", "R", "<plug>(himalaya-email-reply-all)", { noremap = true, silent = true
    })
    vim.api.nvim_buf_set_keymap(0, "n", "F", "<plug>(himalaya-email-forward)", { noremap = true, silent = true })
    vim.api.nvim_buf_set_keymap(0, "n", "m", "<plug>(himalaya-folder-select)", { noremap = true, silent = true })
    vim.api.nvim_buf_set_keymap(0, "n", "M", "<plug>(himalaya-email-move)", { noremap = true, silent = true })
    ```

* New: Searching emails.

    You can use the `g/` binding from within nvim to search for emails. The query syntax supports filtering and sorting queries.

    I've tried changing it to `/` without success :'(

    **Filters**

    A filter query is composed of operators and conditions. There are 3 operators and 8 conditions:

    - `not <condition>`: filter envelopes that do not match the condition
    - `<condition> and <condition>`: filter envelopes that match both conditions
    - `<condition> or <condition>`: filter envelopes that match one of the conditions
    - `date <yyyy-mm-dd>`: filter envelopes that match the given date
    - `before <yyyy-mm-dd>`: filter envelopes with date strictly before the given one
    - `after <yyyy-mm-dd>`: filter envelopes with date strictly after the given one
    - `from <pattern>`: filter envelopes with senders matching the given pattern
    - `to <pattern>`: filter envelopes with recipients matching the given pattern
    - `subject <pattern>`: filter envelopes with subject matching the given pattern
    - `body <pattern>`: filter envelopes with text bodies matching the given pattern
    - `flag <flag>`: filter envelopes matching the given flag

    **Sorting**

    A sort query starts with "order by", and is composed of kinds and orders.
    There are 4 kinds and 2 orders:

    - `date [order]`: sort envelopes by date
    - `from [order]`: sort envelopes by sender
    - `to [order]`: sort envelopes by recipient
    - `subject [order]`: sort envelopes by subject
    - `<kind> asc`: sort envelopes by the given kind in ascending order
    - `<kind> desc`: sort envelopes by the given kind in descending order

    **Examples**

    - `subject foo and body bar`: filter envelopes containing "foo" in their subject and "bar" in their text bodies
    - `order by date desc subject`: sort envelopes by descending date (most recent first), then by ascending subject
    - `subject foo and body bar order by date desc subject`: combination of the 2 previous examples

* New: [List more detected issues.](himalaya.md#not-there-yet)

    - [Replying to an email doesn't mark it as replied](https://github.com/pimalaya/himalaya-vim/issues/14)

* New: [Troubleshoot cannot install the program.](himalaya.md#cannot-install)

    Sometimes [the installation steps fail](https://github.com/pimalaya/himalaya/issues/513) as it's still not in stable. A workaround is to download the binary created by the [pre-release CI](https://github.com/pimalaya/himalaya/actions/workflows/pre-releases.yml). You can do it by:

    - Click on the latest job
    - Click on jobs
    - Click on the job of your architecture
    - Click on "Upload release"
    - Search for "Artifact download URL" and download the file
    - Unpack it and add it somewhere in your `$PATH`

### [Beancount](beancount.md)

* New: [Comments.](beancount.md#comments)

    Any text on a line after the character `;` is ignored, like this:

    ```beancount
    ; I paid and left the taxi, forgot to take change, it was cold.
    2015-01-01 * "Taxi home from concert in Brooklyn"
      Assets:Cash      -20 USD  ; inline comment
      Expenses:Taxi
    ```

* New: Introduce matrix_highlight.

    [Matrix Highlight](https://github.com/DanilaFe/matrix-highlight) is a decentralized and federated way of annotating the web based on Matrix.
    Think of it as an open source alternative to [hypothesis](hypothesis.md).

    It's similar to [Populus](https://github.com/opentower/populus-viewer) but for the web.

    I want to try it and investigate further, especially if you can:

    - Easily extract the annotations
    - Activate it by default everywhere

* New: [Get the schema of a table.](sql.md#get-the-schema-of-a-table)

    [Postgres](https://stackoverflow.com/questions/25639088/show-table-structure-and-list-of-tables-in-postgresql):

    ```
    \d+ table_name
    ```

* New: [Get the last row of a table.](sql.md#get-the-last-row-of-a-table-)

    ```sql
    SELECT * FROM Table ORDER BY ID DESC LIMIT 1
    ```

* New: Introduce beanSQL.

    [bean-sql](https://beancount.github.io/docs/beancount_query_language.html#introduction) is a language to query [`beancount`](beancount.md) data.

    References:

    - [Docs](https://beancount.github.io/docs/beancount_query_language.html#introduction)
    - [Examples](https://aumayr.github.io/beancount-sql-queries/)

* New: [Get the quarter of a date.](bean_sql.md#get-the-quarter-of-a-date)

    Use the `quarter(date)` selector in the `SELECT`. For example:

    ```sql
    SELECT quarter(date) as quarter, sum(position) AS value
    WHERE
        account ~ '^Expenses:' OR
        account ~ '^Income:'
    GROUP BY quarter
    ```

    It will return the quarter in the format `YYYY-QX`.

* New: [Building your own dashboards.](beancount.md#building-your-own-dashboards)

    I was wondering whether to create [fava dashboards](fava_dashboards.md) or to create them directly in [grafana](grafana.md).

    Pros of fava dashboards:

    - They are integrated in fava so it would be easy to browse other beancount data. Although this could be done as well in another window if I used grafana.
    - There is no need to [create the beancount grafana data source logic](https://groups.google.com/g/beancount/c/R3C9c-BPOGI).
    - It's already a working project, I would just need to tweak an existing example.
    Cons:

    - I may need to learn echarts and write JavaScript to tweak some of the dashboards.
    - I wouldn't have all my dashboards in the same place.
    - It only solves part of the problem, I'd still need to write the [bean-sql queries](bean_sql.md). But using beanql is probably the best way to extract data from beancount anyway.
    - It involves more magic than using grafana.
    - grafana dashboards are prettier.
    - I wouldn't use the grafana knowledge.
    - I'd learn a new tool only to use it here instead of taking the chance to improve my grafana skillset.

    I'm going to try with [fava dashboards](fava_dashboards.md) and see how it goes.

* Correction: Deprecate in favour of himalaya.

    DEPRECATED: Use [himalaya](himalaya.md) instead.

* New: [Automatically sync emails.](email_automation.md#automatically-sync-emails)

    I have many emails, and I want to fetch them with different frequencies, in the background, and be notified if anything goes wrong.

    For that purpose I've created a python script, a systemd service and some loki rules to monitor it.

    **Script to sync emails and calendars with different frequencies**

    The script iterates over the accounts configured in `accounts_config` and runs `mbsync` for email accounts and `vdirsyncer` for calendar accounts based on some cron expressions. It logs the output in `logfmt` format so that it's easily handled by [loki](loki.md).

    To run it you'll first need to create a virtualenv, I use `mkvirtualenv account_syncer` which creates a virtualenv in `~/.local/share/virtualenvs/account_syncer`.
+ + Then install the dependencies: + + ```bash + pip install aiocron + ``` + + Then place this script somewhere, for example (`~/.local/bin/account_syncer.py`) + + ```python + import asyncio + import logging + from datetime import datetime + import asyncio.subprocess + import aiocron + + accounts_config = { + "emails": [ + { + "account_name": "lyz", + "cron_expressions": ["*/15 9-23 * * *"], + }, + { + "account_name": "work", + "cron_expressions": ["*/60 8-17 * * 1-5"], # Monday-Friday + }, + { + "account_name": "monitorization", + "cron_expressions": ["*/5 * * * *"], + }, + ], + "calendars": [ + { + "account_name": "lyz", + "cron_expressions": ["*/15 9-23 * * *"], + }, + { + "account_name": "work", + "cron_expressions": ["*/60 8-17 * * 1-5"], # Monday-Friday + }, + ], + } + + class LogfmtFormatter(logging.Formatter): + """Custom formatter to output logs in logfmt style.""" + + def format(self, record: logging.LogRecord) -> str: + log_message = ( + f"level={record.levelname.lower()} " + f"logger={record.name} " + f'msg="{record.getMessage()}"' + ) + return log_message + + def setup_logging(logging_name: str) -> logging.Logger: + """Configure logging to use logfmt format. + Args: + logging_name (str): The logger's name and identifier in the systemd journal. + Returns: + Logger: The configured logger. + """ + console_handler = logging.StreamHandler() + logfmt_formatter = LogfmtFormatter() + console_handler.setFormatter(logfmt_formatter) + logger = logging.getLogger(logging_name) + logger.setLevel(logging.INFO) + logger.addHandler(console_handler) + return logger + + log = setup_logging("account_syncer") + + async def run_mbsync(account_name: str) -> None: + """Run mbsync command asynchronously for email accounts. + + Args: + account_name (str): The name of the email account to sync. 
+ """ + command = f"mbsync {account_name}" + log.info(f"Syncing emails for {account_name}...") + process = await asyncio.create_subprocess_shell( + command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE + ) + stdout, stderr = await process.communicate() + if stdout: + log.info(f"Output for {account_name}: {stdout.decode()}") + if stderr: + log.error(f"Error for {account_name}: {stderr.decode()}") + + async def run_vdirsyncer(account_name: str) -> None: + """Run vdirsyncer command asynchronously for calendar accounts. + + Args: + account_name (str): The name of the calendar account to sync. + """ + command = f"vdirsyncer sync {account_name}" + log.info(f"Syncing calendar for {account_name}...") + process = await asyncio.create_subprocess_shell( + command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE + ) + _, stderr = await process.communicate() + if stderr: + command_log = stderr.decode().strip() + if "error" in command_log or "critical" in command_log: + log.error(f"Output for {account_name}: {command_log}") + elif len(command_log.splitlines()) > 1: + log.info(f"Output for {account_name}: {command_log}") + + def should_i_sync_today(cron_expr: str) -> bool: + """Check if the current time matches the cron expression day and hour constraints.""" + _, hour, _, _, day_of_week = cron_expr.split() + now = datetime.now() + if "*" in hour: + return True + elif not (int(hour.split("-")[0]) <= now.hour <= int(hour.split("-")[1])): + return False + if day_of_week != "*" and str(now.weekday()) not in day_of_week.split(","): + return False + return True + + async def main(): + log.info("Starting account syncer for emails and calendars") + accounts_to_sync = {"emails": [], "calendars": []} + + # Schedule email accounts + for account in accounts_config["emails"]: + account_name = account["account_name"] + for cron_expression in account["cron_expressions"]: + if ( + should_i_sync_today(cron_expression) + and account_name not in 
accounts_to_sync["emails"] + ): + accounts_to_sync["emails"].append(account_name) + aiocron.crontab(cron_expression, func=run_mbsync, args=[account_name]) + log.info( + f"Scheduled mbsync for {account_name} with cron expression: {cron_expression}" + ) + + # Schedule calendar accounts + for account in accounts_config["calendars"]: + account_name = account["account_name"] + for cron_expression in account["cron_expressions"]: + if ( + should_i_sync_today(cron_expression) + and account_name not in accounts_to_sync["calendars"] + ): + accounts_to_sync["calendars"].append(account_name) + aiocron.crontab(cron_expression, func=run_vdirsyncer, args=[account_name]) + log.info( + f"Scheduled vdirsyncer for {account_name} with cron expression: {cron_expression}" + ) + + log.info("Running an initial fetch on today's accounts") + for account_name in accounts_to_sync["emails"]: + await run_mbsync(account_name) + for account_name in accounts_to_sync["calendars"]: + await run_vdirsyncer(account_name) + + log.info("Finished loading accounts") + while True: + await asyncio.sleep(60) + + if __name__ == "__main__": + asyncio.run(main()) + ``` + + Where: + + - `accounts_config`: Holds your account configuration. Each account must contain an `account_name` which should be the name of the `mbsync` or `vdirsyncer` profile, and `cron_expressions` must be a list of cron valid expressions you want the email to be synced. + + **Create the systemd service** + + We're using a non-root systemd service. 
You can follow [these instructions](linux_snippets.md#create-a-systemd-service-for-a-non-root-user) to configure this service: + + ```ini + [Unit] + Description=Account Sync Service for emails and calendars + After=graphical-session.target + + [Service] + Type=simple + ExecStart=/home/lyz/.local/share/virtualenvs/account_syncer/bin/python /home/lyz/.local/bin/ + WorkingDirectory=/home/lyz/.local/bin + Restart=on-failure + StandardOutput=journal + StandardError=journal + SyslogIdentifier=account_syncer + Environment="PATH=/home/lyz/.local/share/virtualenvs/account_syncer/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + Environment="DISPLAY=:0" + Environment="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus" + + [Install] + WantedBy=graphical-session.target + ``` + + Remember to tweak the service to match your current case and paths. + + As we'll probably need to enter our `pass` password we need the service to start once we've logged into the graphical interface. + + **Monitor the automation** + + It's always nice to know if the system is working as expected without adding mental load. 
To do that I'm creating the next [loki](loki.md) rules: + + ```yaml + groups: + - name: account_sync + rules: + - alert: AccountSyncIsNotRunningWarning + expr: | + (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"}[15m])) or sum by(hostname) (count_over_time({hostname="my_computer"} [15m])) * 0 ) == 0 + for: 0m + labels: + severity: warning + annotations: + summary: "The account sync script is not running {{ $labels.hostname}}" + - alert: AccountSyncIsNotRunningError + expr: | + (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"}[3h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [3h])) * 0 ) == 0 + for: 0m + labels: + severity: error + annotations: + summary: "The account sync script has been down for at least 3 hours {{ $labels.hostname}}" + - alert: AccountSyncError + expr: | + count(rate({job="systemd-journal", syslog_identifier="account_syncer"} |= `` | logfmt | level_extracted=`error` [5m])) > 0 + for: 0m + labels: + severity: warning + annotations: + summary: "There are errors in the account sync log at {{ $labels.hostname}}" + + - alert: EmailAccountIsOutOfSyncLyz + expr: | + (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"} | logfmt | msg=`Syncing emails for lyz...`[1h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [1h])) * 0 ) == 0 + for: 0m + labels: + severity: error + annotations: + summary: "The email account lyz has been out of sync for 1h {{ $labels.hostname}}" + + - alert: CalendarAccountIsOutOfSyncLyz + expr: | + (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"} | logfmt | msg=`Syncing calendar for lyz...`[3h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [3h])) * 0 ) == 0 + for: 0m + labels: + severity: error + annotations: + summary: "The calendar account lyz has been out of sync for 3h {{ $labels.hostname}}" + ``` + 
    Where:

    - You need to change `my_computer` for the hostname of the device running the service.
    - Tweak the OutOfSync alerts to match your account (change the `lyz` part).

    These rules will raise:

    - A warning if the sync has not shown any activity in the last 15 minutes.
    - An error if the sync has not shown any activity in the last 3 hours.
    - An error if there is an error in the logs of the automation.

### [Dino](dino.md)

* New: Disable automatic OMEMO key acceptance.

    Dino automatically accepts new OMEMO keys from your own other devices and your chat partners by default. This default behaviour means that the admin of the XMPP server could inject their own public OMEMO keys without user verification, which enables the owner of the associated private OMEMO keys to decrypt your OMEMO secured conversation without being easily noticed.

    To prevent this, two actions are required. The second consists of several steps and must be taken for each new chat partner.

    - First, the automatic acceptance of new keys from your own other devices must be deactivated. Configure this in the account settings of your own accounts.
    - Second, the automatic acceptance of new keys from your chat partners must be deactivated. Configure this in the contact details of every chat partner. Be aware that in the case of group chats, the entire communication can be decrypted unnoticed if even one partner does not actively deactivate automatic acceptance of new OMEMO keys.

    Always confirm new keys from your chat partner before accepting them manually.

* New: [Dino does not use encryption by default.](dino.md#dino-does-not-use-encryption-by-default)

    You have to initially enable encryption in the conversation window by clicking the lock-symbol and choosing OMEMO. Future messages and file transfers to this contact will be encrypted with OMEMO automatically.

    - Every chat partner has to enable encryption separately.
    - If only one of two chat partners has activated OMEMO, only this part of the communication will be encrypted. The same applies to file transfers.
    - If you get the message "This contact does not support OMEMO", make sure that your chat partner has accepted the request to add them to your contact list and that you accepted vice versa.

* New: [Install in Tails.](dino.md#install-in-tails)

    If you want more details, follow [this article](https://t-hinrichs.net/DinoTails/DinoTails_recent.html) at the same time as you read this one. That one is more outdated but more detailed.

    - Boot a clean Tails
    - Create and configure the Persistent Storage
    - Restart Tails and open the Persistent Storage
    - Configure the persistence of the directory:

        ```bash
        echo -e '/home/amnesia/.local/share/dino source=dino' | sudo tee -a /live/persistence/TailsData_unlocked/persistence.conf > /dev/null
        ```

    - Restart Tails
    - Install the application:

        ```bash
        sudo apt-get update
        sudo apt-get install dino-im
        ```

    - Configure the `dino-im` alias to use `torsocks`:

        ```bash
        echo 'alias dino="torsocks dino-im &> /dev/null &"' | sudo tee -a /live/persistence/TailsData_unlocked/dotfiles/.bashrc > /dev/null
        echo 'alias dino="torsocks dino-im &> /dev/null &"' >> ~/.bashrc
        ```

* New: Introduce Fava Dashboards.

    **Installation**

    ```bash
    pip install git+https://github.com/andreasgerstmayr/fava-dashboards.git
    ```

    Enable this plugin in Fava by adding the following lines to your ledger:

    ```beancount
    2010-01-01 custom "fava-extension" "fava_dashboards"
    ```

    Then you'll need to [create a `dashboards.yaml`](#configuration) file where your ledger lives.

    **[Configuration](https://github.com/andreasgerstmayr/fava-dashboards/tree/main?tab=readme-ov-file#configuration)**

    The plugin looks by default for a `dashboards.yaml` file in the directory of the Beancount ledger (e.g.
    if you run `fava personal.beancount`, the `dashboards.yaml` file should be in the same directory as `personal.beancount`).

    The configuration file can contain multiple dashboards, and a dashboard contains one or more panels. A panel has a relative width (e.g. `50%` for 2 columns, or `33.3%` for 3 column layouts) and an absolute height.

    The `queries` field contains one or multiple queries. The Beancount query must be stored in the `bql` field of the respective query. It can contain Jinja template syntax to access the `panel` and `ledger` variables described below (example: use `{{ledger.ccy}}` to access the first configured operating currency). The query results can be accessed via `panel.queries[i].result`, where `i` is the index of the query in the `queries` field.

    Note: In addition to the Beancount query, Fava's filter bar further filters the available entries of the ledger.

    Common code for utility functions can be defined in the dashboards configuration file, either inline in `utils.inline` or in an external file defined in `utils.path`.

    *Start your configuration*

    It's better to tweak the example than to start from scratch. Get the example by:

    ```bash
    cd $(mktemp -d)
    git clone https://github.com/andreasgerstmayr/fava-dashboards
    cd fava-dashboards/example
    fava example.beancount
    ```

    **Configuration reference**

    HTML, echarts and d3-sankey panels:
    The `script` field must contain valid JavaScript code.
    It must return a valid configuration depending on the panel `type`.
    The following variables and functions are available:

    * `ext`: the Fava [`ExtensionContext`](https://github.com/beancount/fava/blob/main/frontend/src/extensions.ts)
    * `ext.api.get("query", {bql: "SELECT ..."})`: executes the specified BQL query
    * `panel`: the current (augmented) panel definition. The results of the BQL queries can be accessed with `panel.queries[i].result`.
+ * `ledger.dateFirst`: first date in the current date filter + * `ledger.dateLast`: last date in the current date filter + * `ledger.operatingCurrencies`: configured operating currencies of the ledger + * `ledger.ccy`: shortcut for the first configured operating currency of the ledger + * `ledger.accounts`: declared accounts of the ledger + * `ledger.commodities`: declared commodities of the ledger + * `helpers.urlFor(url)`: add current Fava filter parameters to url + * `utils`: the return value of the `utils` code of the dashboard configuration + + Jinja2 panels: + The `template` field must contain valid Jinja2 template code. + The following variables are available: + * `panel`: see above + * `ledger`: see above + * `favaledger`: a reference to the `FavaLedger` object + + *Common Panel Properties* + * `title`: title of the panel. Default: unset + * `width`: width of the panel. Default: 100% + * `height`: height of the panel. Default: 400px + * `link`: optional link target of the panel header. + * `queries`: a list of dicts with a `bql` attribute. + * `type`: panel type. Must be one of `html`, `echarts`, `d3_sankey` or `jinja2`. + + HTML panel + The `script` code of HTML panels must return valid HTML. + The HTML code will be rendered in the panel. + + ECharts panel + The `script` code of [Apache ECharts](https://echarts.apache.org) panels must return valid [Apache ECharts](https://echarts.apache.org) chart options. + Please take a look at the [ECharts examples](https://echarts.apache.org/examples) to get familiar with the available chart types and options. + + d3-sankey panel + The `script` code of d3-sankey panels must return valid d3-sankey chart options. + Please take a look at the example dashboard configuration [dashboards.yaml](example/dashboards.yaml). + + Jinja2 panel + The `template` field of Jinja2 panels must contain valid Jinja2 template code. + The rendered template will be shown in the panel. 
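    Putting the pieces together, a minimal dashboard with a single HTML panel could look like this. This is a hypothetical sketch built from the panel properties listed above; double-check the top-level `dashboards`/`name` layout against the `dashboards.yaml` shipped with the example:

    ```yaml
    dashboards:
      - name: Minimal example
        panels:
          - title: Transaction count
            width: 50%
            height: 120px
            queries:
              - bql: SELECT count(date) AS txns
            type: html
            script: |
              const rows = panel.queries[0].result;
              return `<h2>${rows[0].txns} transactions</h2>`;
    ```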
+ + **Debugging** + + Add `console.log` strings in the javascript code to debug it. + + **References** + + - [Code](https://github.com/andreasgerstmayr/fava-dashboards/tree/main?tab=readme-ov-file#configuration) + - [Article](https://www.andreasgerstmayr.at/2023/03/12/dashboards-with-beancount-and-fava.html) + + **Examples** + + - [Fava Portfolio returns](https://github.com/andreasgerstmayr/fava-portfolio-returns) + - [Fava investor](https://github.com/andreasgerstmayr/fava-portfolio-returns) + +* New: [Dashboard prototypes.](fava_dashboards.md#dashboard-prototypes) + + **Vertical bars with one serie using year** + + ```yaml + - title: Net Year Profit 💰 + width: 50% + link: /beancount/income_statement/ + queries: + - bql: | + SELECT year, sum(position) AS value + WHERE + account ~ '^Expenses:' OR + account ~ '^Income:' + GROUP BY year + + link: /beancount/balance_sheet/?time={time} + type: echarts + script: | + const currencyFormatter = utils.currencyFormatter(ledger.ccy); + const years = utils.iterateYears(ledger.dateFirst, ledger.dateLast) + const amounts = {}; + + // the beancount query only returns periods where there was at least one matching transaction, therefore we group by period + for (let row of panel.queries[0].result) { + amounts[`${row.year}`] = -row.value[ledger.ccy]; + } + + return { + tooltip: { + trigger: "axis", + valueFormatter: currencyFormatter, + }, + xAxis: { + data: years, + }, + yAxis: { + axisLabel: { + formatter: currencyFormatter, + }, + }, + series: [ + { + type: "bar", + data: years.map((year) => amounts[year]), + color: utils.green, + }, + ], + }; + ``` + + **Vertical bars using one serie using quarters** + + ```yaml + - title: Net Quarter Profit 💰 + width: 50% + link: /beancount/income_statement/ + queries: + - bql: | + SELECT quarter(date) as quarter, sum(position) AS value + WHERE + account ~ '^Expenses:' OR + account ~ '^Income:' + GROUP BY quarter + + link: /beancount/balance_sheet/?time={time} + type: echarts + script: | + 
const currencyFormatter = utils.currencyFormatter(ledger.ccy);
+     const quarters = utils.iterateQuarters(ledger.dateFirst, ledger.dateLast).map((q) => `${q.year}-${q.quarter}`);
+     const amounts = {};
+
+     // the beancount query only returns periods where there was at least one matching transaction, therefore we group by period
+     for (let row of panel.queries[0].result) {
+       amounts[`${row.quarter}`] = -row.value[ledger.ccy];
+     }
+
+     return {
+       tooltip: {
+         trigger: "axis",
+         valueFormatter: currencyFormatter,
+       },
+       xAxis: {
+         data: quarters,
+       },
+       yAxis: {
+         axisLabel: {
+           formatter: currencyFormatter,
+         },
+       },
+       series: [
+         {
+           type: "bar",
+           data: quarters.map((quarter) => amounts[quarter]),
+         },
+       ],
+     };
+ ```
+
+ **Vertical bars showing the evolution of one query over the months**
+
+ ```yaml
+ - title: Net Year Profit Distribution 💰
+   width: 50%
+   link: /beancount/income_statement/
+   queries:
+     - bql: |
+         SELECT year, month, sum(position) AS value
+         WHERE
+           account ~ '^Expenses:' OR
+           account ~ '^Income:'
+         GROUP BY year, month
+       link: /beancount/balance_sheet/?time={time}
+   type: echarts
+   script: |
+     const currencyFormatter = utils.currencyFormatter(ledger.ccy);
+     const years = utils.iterateYears(ledger.dateFirst, ledger.dateLast);
+     const months = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12"];
+     const amounts = {};
+
+     for (let row of panel.queries[0].result) {
+       if (!amounts[row.year]) {
+         amounts[row.year] = {};
+       }
+       amounts[row.year][row.month] = -row.value[ledger.ccy];
+     }
+
+     return {
+       tooltip: {
+         valueFormatter: currencyFormatter,
+       },
+       legend: {
+         top: "bottom",
+       },
+       xAxis: {
+         data: months,
+       },
+       yAxis: {
+         axisLabel: {
+           formatter: currencyFormatter,
+         },
+       },
+       // index each series by month, so that months without transactions
+       // don't shift the bars out of alignment with the axis
+       series: years.map((year) => ({
+         type: "bar",
+         name: year,
+         data: months.map((month) => (amounts[year] || {})[month]),
+         label: {
+           show: false,
+           formatter: (params) => currencyFormatter(params.value),
+         },
+       })),
+     };
+ ```
+
+### [Rocketchat](rocketchat.md)
+
+* New: [How to use Rocketchat's 
API.](rocketchat.md#api)
+
+ The API docs are a bit weird: you need to go to [endpoints](https://developer.rocket.chat/reference/api/rest-api/endpoints) and find the one you need. Your best bet, though, is to open the browser network console, see which requests the web client makes, and then find those in the docs.
+
+* New: [Add end of life link.](rocketchat.md#references)
+
+ Warning: they only support versions for 6 months! And they give you just 12 days' notice that you'll lose service if you don't update.
+
+ - [End of life for the versions](https://docs.rocket.chat/docs/version-durability)
+
+## Content Management
+
+### [Jellyfin](moonlight.md)
+
+* New: Introduce moonlight.
+
+ [Moonlight](https://github.com/moonlight-stream/moonlight-docs/wiki) is an open source client implementation of NVIDIA GameStream that allows you to stream your collection of games and apps from your GameStream-compatible PC to another device on your network or the Internet. You can play your favorite games on your PC, phone, tablet, or TV with Moonlight.
+
+ References:
+
+ - [Home](https://moonlight-stream.org/)
+ - [Docs](https://github.com/moonlight-stream/moonlight-docs/wiki)
+
+* New: [Python library.](jellyfin.md#python-library)
+
+ [This is the API client](https://github.com/jellyfin/jellyfin-apiclient-python/tree/master) from Jellyfin Kodi extracted as a python package so that other users may use the API without maintaining a fork of the API client. Please note that this API client is not complete. You may have to add API calls to perform certain tasks.
+
+ It doesn't (yet) support async.
+
+* New: [Troubleshoot System.InvalidOperationException: There is an error in XML document (0, 0).](jellyfin.md#system.invalidoperationexception:-there-is-an-error-in-xml-document-0,-0)
+
+ This may happen if you run out of disk space and some xml file in the jellyfin data directory becomes empty. The solution is to restore that file from backup.
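+
+ A quick way to find the culprit is to look for zero-byte XML files under the data directory. The path below is a placeholder, not a Jellyfin default; point it at your actual data directory (or docker volume):
+
+ ```bash
+ # List empty XML files that may be breaking Jellyfin. JELLYFIN_DATA is an
+ # assumption for this sketch; set it to your real data directory.
+ JELLYFIN_DATA="${JELLYFIN_DATA:-.}"
+ find "$JELLYFIN_DATA" -name '*.xml' -empty
+ ```
+
+ Restore any file it reports from your backup.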
+
+* New: [Enable hardware transcoding.](jellyfin.md#enable-hardware-transcoding)
+
+ **[Enable NVIDIA hardware transcoding](https://jellyfin.org/docs/general/administration/hardware-acceleration/nvidia)**
+
+ *Remove the artificial limit of concurrent NVENC transcodings*
+
+ Consumer targeted [Geforce and some entry-level Quadro cards](https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new) have an artificial limit on the number of concurrent NVENC encoding sessions (max of 8 on most modern ones). This restriction can be circumvented by applying an unofficial patch to the NVIDIA Linux and Windows driver.
+
+ To apply the patch:
+
+ First check with `nvidia-smi` that your current driver version is supported. If it's not, try to upgrade the drivers to a supported one, or consider whether you actually need more than 8 concurrent transcodes.
+
+ ```bash
+ wget https://raw.githubusercontent.com/keylase/nvidia-patch/refs/heads/master/patch.sh
+ chmod +x patch.sh
+ ./patch.sh
+ ```
+
+ If you need to roll back the changes run `./patch.sh -r`.
+
+ You can also apply the patch [within the docker container itself](https://github.com/keylase/nvidia-patch?tab=readme-ov-file#docker-support).
+
+ ```yaml
+ services:
+   jellyfin:
+     image: jellyfin/jellyfin
+     user: 1000:1000
+     network_mode: 'host'
+     volumes:
+       - /path/to/config:/config
+       - /path/to/cache:/cache
+       - /path/to/media:/media
+     runtime: nvidia
+     deploy:
+       resources:
+         reservations:
+           devices:
+             - driver: nvidia
+               count: all
+               capabilities: [gpu]
+ ```
+
+ Restart the container and then check that you can access the graphics card with:
+
+ ```bash
+ docker exec -it jellyfin nvidia-smi
+ ```
+
+ Enable NVENC in Jellyfin and uncheck the unsupported codecs.
+
+ **Tweak the docker-compose**
+
+ The official Docker image doesn't include any NVIDIA proprietary driver: you have to install the NVIDIA driver and the NVIDIA Container Toolkit on the host system to allow Docker access to your GPU.
+
+### [Immich](ombi.md)
+
+* New: [Set default quality of request per user.](ombi.md#set-default-quality-of-request-per-user)
+
+ Sometimes one specific user continuously asks for a better quality of the content. If you go into the user configuration (as admin) you can set the default quality profiles for that user.
+
+* New: Introduce immich.
+
+ Self-hosted photo and video backup solution directly from your mobile phone.
+
+ References:
+
+ - [Home](https://immich.app/)
+ - [Api](https://immich.app/docs/api)
+ - [Docs](https://immich.app/docs/overview/introduction)
+ - [Source](https://github.com/immich-app/immich)
+ - [Blog](https://immich.app/blog)
+ - [Demo](https://demo.immich.app/photos)
+
+* New: [Installation.](immich.md#installation)
+
+ - Create a directory of your choice (e.g. `./immich-app`) to hold the `docker-compose.yml` and `.env` files.
+
+   ```bash
+   mkdir ./immich-app
+   cd ./immich-app
+   ```
+
+ - Download `docker-compose.yml`, `example.env` and optionally the `hwaccel.yml` files:
+
+   ```bash
+   wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
+   wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
+   wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.yml
+   ```
+
+ - Tweak those files with these thoughts in mind:
+   - `immich` won't respect your upload media directory structure, so until you trust the software, copy your media to the uploads directory.
+   - immich is not stable yet, so you need to disable the upgrades from watchtower. The easiest way is to [pin the latest stable version](https://github.com/immich-app/immich/pkgs/container/immich-server/versions?filters%5Bversion_type%5D=tagged) in the `.env` file.
+   - Populate custom database information if necessary.
+   - Populate `UPLOAD_LOCATION` with your preferred location for storing backup assets.
+   - Consider changing `DB_PASSWORD` to something randomly generated.
+
+ - From the directory you created in Step 1 (which should now contain your customized `docker-compose.yml` and `.env` files) run:
+
+   ```bash
+   docker compose up -d
+   ```
+
+* New: [Configure smart search for other language.](immich.md#configure-smart-search-for-other-language)
+
+ You can change to a multilingual model listed [here](https://huggingface.co/collections/immich-app/multilingual-clip-654eb08c2382f591eeb8c2a7) by going to Administration > Machine Learning Settings > Smart Search and replacing the name of the model.
+
+ Choose the one that has more downloads. For example, if you want the `immich-app/XLM-Roberta-Large-Vit-B-16Plus` model, you should only enter `XLM-Roberta-Large-Vit-B-16Plus` in the program configuration. Be careful not to add trailing whitespaces.
+
+ Be sure to re-run Smart Search on all assets after this change. You can then search in over 100 languages.
+
+* New: [External storage.](immich.md#external-storage)
+
+ If you already have a library on the machine where immich is installed you can use an [external library](https://immich.app/docs/guides/external-library). Immich will respect the files in that directory.
+
+ It won't create albums from the directory structure. If you want to do that check [this](https://github.com/alvistar/immich-albums) or [this](https://gist.github.com/REDVM/d8b3830b2802db881f5b59033cf35702) solution.
+
+* New: [My personal workflow.](immich.md#my-personal-workflow)
+
+ I've tailored a personal workflow given the next thoughts:
+
+ - I don't want to expose Immich to the world, at least until it's a stable product.
+ - I already have in place a sync mechanism with [syncthing](syncthing.md) for all the mobile stuff.
+ - I do want to still be able to share some albums with my friends and family.
+ - I want some mobile directories to be cleaned after importing the data (for example the `camera/DCIM`), but others should leave the files as they are after the import (OsmAnd+ notes).
+
+ Ingesting the files:
+
+ As all the files I want to ingest are sent to the server through syncthing, I've created a cron script that copies or moves the required files. Something like:
+
+ ```bash
+ date
+ echo 'Updating the OsmAnd+ data'
+ rsync -auhvEX --progress /data/apps/syncthing/data/Osmand/avnotes /data/media/pictures/unclassified
+
+ echo 'Updating the Camera data'
+ mv /data/apps/syncthing/data/camera/Camera/* /data/media/pictures/unclassified/
+
+ echo 'Cleaning laptop home'
+ mv /data/media/downloads/*jpeg /data/media/downloads/*jpg /data/media/downloads/*png /data/media/pictures/unclassified/
+ echo
+ ```
+
+ Where:
+
+ - `/data/media/pictures/unclassified` is a subpath of my [external library](#external-library).
+ - The last `echo` makes sure that the script exits with a return code of `0`. The script can definitely be improved, as it only takes into account the happy path and I'll silently miss errors in its execution, but as a first iteration it will do the job.
+
+ Then run the script in a cron and log the output to [`journald`](journald.md):
+
+ ```cron
+ 0 0 * * * /bin/bash /usr/local/bin/archive-photos.sh | /usr/bin/logger -t archive_fotos
+ ```
+
+ Make sure to configure the update library cron job to run after this script has ended.
+
+* New: [Not there yet.](immich.md#not-there-yet)
+
+ There are some features that are still lacking:
+
+ - [Image rotation](https://github.com/immich-app/immich/discussions/1695)
+ - [Smart albums](https://github.com/immich-app/immich/discussions/1673)
+ - [Image rating](https://github.com/immich-app/immich/discussions/3619)
+ - [Tags](https://github.com/immich-app/immich/discussions/1651)
+ - [Nested albums](https://github.com/immich-app/immich/discussions/2073#discussioncomment-6584926)
+ - [Duplication management](https://github.com/immich-app/immich/discussions/1968)
+ - [Search guide](https://github.com/immich-app/immich/discussions/3657)
+
+* New: [Edit an image metadata.](immich.md#edit-an-image-metadata)
+
+ You can't do it directly through the interface yet, use [exiftool](linux_snippets.md#Remove-image-metadata) instead.
+
+ This is useful, for example, to remove the geolocation from images that are not yours.
+
+* New: [Keyboard shortcuts.](immich.md#keyboard-shortcuts)
+
+ You can press `?` to see the shortcuts. Some of the most useful are:
+
+ - `f`: Toggle favourite
+ - `Shift+a`: Archive element
+
+### [Mediatracker](mediatracker.md)
+
+* New: [How to use the mediatracker API.](mediatracker.md#api)
+
+ I haven't found a way to see the api docs from my own instance. Luckily you can browse them [at the official instance](https://bonukai.github.io/MediaTracker/).
+
+ You can create an application token on your user configuration. Then you can use it with something similar to:
+
+ ```bash
+ curl -H 'Content-Type: application/json' https://mediatracker.your-domain.org/api/logs\?token\=your-token | jq
+ ```
+
+* New: Introduce python library.
+
+ There is a [python library](https://github.com/jonkristian/pymediatracker), although it doesn't (yet) have any documentation and the functionality so far is only to get information, not to push changes.
+
+* New: [Get list of tv shows.](mediatracker.md#get-list-of-tv-shows)
+
+ With `/api/items?mediaType=tv` you can get a list of all tv shows with the next interesting fields:
+
+ - `id`: mediatracker id
+ - `tmdbId`:
+ - `tvdbId`:
+ - `imdbId`:
+ - `title`:
+ - `lastTimeUpdated`: epoch time
+ - `lastSeenAt`: epoch time
+ - `seen`: bool
+ - `onWatchlist`: bool
+ - `firstUnwatchedEpisode`:
+   - `id`: mediatracker episode id
+   - `episodeNumber`:
+   - `seasonNumber`:
+   - `tvShowId`:
+   - `seasonId`:
+ - `lastAiredEpisode`: same schema as before
+
+ Then you can use the `api/details/{mediaItemId}` endpoint to get all the information of all the episodes of each tv show.
+
+* New: [Add missing books.](mediatracker.md#add-missing-books)
+
+ - Register an account in [openlibrary.org](https://openlibrary.org)
+ - Add the book
+ - Then add it to mediatracker
+
+### [ffmpeg](ffmpeg.md)
+
+* New: [Reduce the video size.](ffmpeg.md#reduce-the-video-size)
+
+ If you don't mind using `H.265`, replace the libx264 codec with libx265 and push the compression lever further by increasing the CRF value: add, say, 4 or 6, since a reasonable range for H.265 may be 24 to 30. Note that lower CRF values correspond to higher bitrates, and hence produce higher quality videos.
+
+ ```bash
+ ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4
+ ```
+
+ If you want to stick to H.264, reduce the bitrate. You can check the current one with `ffprobe input.mkv`. Once you've chosen the new rate, change it with:
+
+ ```bash
+ ffmpeg -i input.mp4 -b 3000k output.mp4
+ ```
+
+ An additional option that might be worth considering is setting the Constant Rate Factor, which lowers the average bit rate but retains better quality. Vary the CRF between around 18 and 24: the lower the value, the higher the bitrate.
+
+ ```bash
+ ffmpeg -i input.mp4 -vcodec libx264 -crf 20 output.mp4
+ ```
+
+### [Photo management](photo_self_hosted.md)
+
+* New: Do comparison of selfhosted photo software.
+
+ There are [many alternatives to self host a photo management software](https://awesome-selfhosted.net/tags/photo-and-video-galleries.html), here goes my personal comparison. You should complement this article with [meichthys' one](https://meichthys.github.io/foss_photo_libraries/).
+
+ !!! note "TL;DR: I'd first go with Immich, then LibrePhotos and then LycheeOrg"
+
+ | Feature | Home-Gallery | Immich | LibrePhotos |
+ | --- | --- | --- | --- |
+ | UI | Fine | Good | Fine |
+ | Popular (stars) | 614 | 25k | 6k |
+ | Active (PR/Issues)(1) | ? | 251/231 | 27/16 |
+ | Easy deployment | ? | True | Complicated |
+ | Good docs | True | True | True |
+ | Stable | True | False | True |
+ | Smart search | ? | True | True |
+ | Language | Javascript | Typescript | Python |
+ | Batch edit | True | True | ? |
+ | Multi-user | False | True | ? |
+ | Mobile app | ? | True | ? |
+ | Oauth support | ? | True | ? |
+ | Facial recognition | ? | True | ? |
+ | Scales well | False | True | ? |
+ | Favourites | ? | True | ? |
+ | Archive | ? | True | ? |
+ | Has API | True | True | ? |
+ | Map support | True | True | ? |
+ | Video Support | True | True | ? |
+ | Discover similar | True | True | ? |
+ | Static site | True | False | ? |
+
+ - (1): Repository stats over the last month
+
+ **[Immich](immich.md)**:
+
+ References:
+
+ - [Home](https://immich.app/)
+ - [Demo](https://demo.immich.app/photos)
+ - [Source](https://github.com/immich-app/immich)
+
+ Pros:
+
+ - Smart search is awesome Oo
+ - Create shared albums that people can use to upload and download
+ - Map with leaflet
+ - Explore by people and places
+ - Docker compose
+ - Optional [hardware acceleration](https://immich.app/docs/features/hardware-transcoding)
+ - Very popular: 25k stars, 1.1k forks
+ - Has a [CLI](https://immich.app/docs/features/command-line-interface)
+ - Can [load data from a directory](https://immich.app/docs/features/libraries)
+ - It has an [android app on fdroid to automatically upload media](https://immich.app/docs/features/mobile-app)
+ - [Sharing libraries with other users](https://immich.app/docs/features/partner-sharing) and with the public
+ - Favorites and archive
+ - Public sharing
+ - Oauth2, especially with [authentik <3](https://immich.app/docs/administration/oauth)
+ - [Extensive API](https://immich.app/docs/api/introduction)
+ - It has a UI similar to google photos, so it would be easy for non technical users to use.
+ - Batch edit
+ - Discover similar through the smart search
+
+ Cons:
+
+ - If you want to get results outside the smart search you are going to have a bad time. There is still no way to filter the smart search results or even sort them. You're sold to the AI.
+ - The dev suggests not to use watchtower as the project is in unstable alpha
+ - Doesn't work well in firefox
+ - It doesn't work with tags, which you don't need anyway because the smart search is so powerful.
+ - Scans pictures on the file system + + **[LibrePhotos](https://docs.librephotos.com/)**: + + References: + + - [Source](https://github.com/LibrePhotos/librephotos) + - [Docs](https://docs.librephotos.com/docs/intro) + - [Demo](https://demo2.librephotos.com/login) + - [Outdated comparison](https://docs.librephotos.com/docs/user-guide/features) + + Pros: + + - [docker compose](https://docs.librephotos.com/docs/installation/standard-install), although you need to build the dockers yourself + - [android app](https://docs.librephotos.com/docs/user-guide/mobile/) + - 6k stars, 267 forks + - object, scene ai extraction + + Cons: + + - Not as good as Immich. + + **[Home-Gallery](https://docs.home-gallery.org/general.html)**: + + You can see the demo [here](https://demo.home-gallery.org/). + + Nice features: + + - Simple UI + - Discover similar images + - Static site generator + - Shift click to do batch editing + + Cons: + + - All users see all media + - The whole database is loaded into the browser and requires recent (mobile) devices and internet connection + - Current tested limits are about 400,000 images/videos + + **Lycheeorg**: + + References: + + - [Home](https://lycheeorg.github.io/) + - [Docs](https://lycheeorg.github.io/docs) + - [Source](https://github.com/LycheeOrg/Lychee) + + Pros: + + - Sharing like it should be. One click and every photo and album is ready for the public. You can also protect albums with passwords if you want. It's under your control. + - Manual tags + - apparently safe upgrades + - docker compose + - 2.9k stars + + Cons: + - demo doesn't show many features + - no ai + + **Photoview**: + + - [Home](https://photoview.github.io/) + - [Source](https://github.com/photoview/photoview) + - [Docs](https://photoview.github.io/en/docs/usage-people/) + + Pros: + + - Syncs with file system + - Albums and individual photos or videos can easily be shared by generating a public or password protected link. 
+ - users support
+ - maps support
+ - 4.4k stars
+ - Face recognition
+
+ Cons:
+
+ - Demo difficult to understand as it's not in english
+ - mobile app only for ios
+ - last commit 6 months ago
+
+ **Pigallery2**:
+
+ References:
+
+ - [Home](https://bpatrik.github.io/pigallery2/)
+
+ Pros:
+
+ - map
+ - The gallery also supports *.gpx files to show your tracked path on the map too
+ - App supports full boolean logic with negation and exact or wildcard search. It also provides handy suggestions with autocomplete.
+ - Face recognition: PiGallery2 can read face regions from photo metadata. Current limitation: no ML-based, automatic face detection.
+ - rating and grouping by rating
+ - easy query builder
+ - video transcoding
+ - blog support: Markdown based blogging support
+
+   You can write some notes in the *.md files for every directory
+
+ - You can create logical albums (a.k.a. Saved search) from any search query. Current limitation: it is not possible to create albums from manually picked photos.
+ - PiGallery2 has a rich settings page where you can easily set up the gallery.
+
+ Cons:
+
+ - no ML-based face recognition
+
+ **Piwigo**:
+
+ References:
+
+ - [Home](https://piwigo.org)
+ - [Source](https://github.com/Piwigo/Piwigo)
+
+ Piwigo is open source photo management software. Manage, organize and share your photos easily on the web. Designed for organisations, teams and individuals.
+
+ Pros:
+
+ - Thousands of organizations and millions of individuals love using Piwigo
+ - Shines when it comes to classifying thousands or even hundreds of thousands of photos.
+ - Born in 2002, Piwigo has been supporting its users for more than 21 years. Always evolving!
+ - You can add photos with the web form, any FTP client, a desktop application like digiKam, Shotwell or Lightroom, or the mobile applications.
+ - Filter photos from your collection, make a selection and apply actions in batch: change the author, add some tags, associate to a new album, set geolocation...
+ - Make your photos private and decide who can see each of them. You can set permissions on albums and photos, for groups or individual users.
+ - Piwigo can read GPS latitude and longitude from embedded metadata. Then, with a plugin for Google Maps or OpenStreetMap, Piwigo can display your photos on an interactive map.
+ - Change appearance with themes. Add features with plugins. Extensions require just a few clicks to get installed. 350 extensions available, and growing!
+ - With the Fotorama plugin, or specific themes such as Bootstrap Darkroom, you can experience the full screen slideshow.
+ - Your visitors can post comments, give ratings, mark photos as favorite, perform searches and get notified of news by email.
+ - Piwigo web API makes it easy for developers to perform actions from other applications
+ - GNU General Public License, or GPL
+ - 2.9k stars, 400 forks
+ - Still active
+ - Nice release documents: https://piwigo.org/release-14.0.0
+
+ Cons:
+
+ - Official docs don't mention docker
+ - no demo: https://piwigo.org/demo
+ - Unpleasant docs: https://piwigo.org/doc/doku.php
+ - Awful plugin search: https://piwigo.org/ext/
+
+ **[Damselfly](https://damselfly.info/)**:
+
+ Fast server-based photo management system for large collections of images. Includes face detection, face & object recognition, powerful search, and EXIF Keyword tagging. Runs on Linux, MacOS and Windows.
+
+ Very ugly UI.
+
+ **[Sigal](https://github.com/saimn/sigal)**:
+
+ Too simple.
+
+ **[Spis](https://github.com/gbbirkisson/spis)**:
+
+ Low number of maintainers and too simple.
+
+### [Kodi](kodi.md)
+
+* New: Start working on a migration script to mediatracker.
+* New: [Extract kodi data from the database.](kodi.md#from-the-database)
+
+ At `~/.kodi/userdata/Database/MyVideos116.db` you can extract the data from the next tables:
+
+ - In the `movie_view` table there is:
+   - `idMovie`: kodi id for the movie
+   - `c00`: Movie title
+   - `userrating`
+   - `uniqueid_value`: The id of the external web service
+   - `uniqueid_type`: The web it extracts the id from
+   - `lastPlayed`: The reproduction date
+ - In the `tvshow_view` table there is:
+   - `idShow`: kodi id of a show
+   - `c00`: title
+   - `userrating`
+   - `lastPlayed`: The reproduction date
+   - `uniqueid_value`: The id of the external web service
+   - `uniqueid_type`: The web it extracts the id from
+ - In the `season_view` there is no interesting data as the userrating is null on all rows.
+ - In the `episode_view` table there is:
+   - `idEpisode`: kodi id for the episode
+   - `idShow`: kodi id of a show
+   - `idSeason`: kodi id of a season
+   - `c00`: title
+   - `userrating`
+   - `lastPlayed`: The reproduction date
+   - `uniqueid_value`: The id of the external web service
+   - `uniqueid_type`: The web it extracts the id from. I've seen mainly tvdb and sonarr
+ - Don't use the `rating` table as it only stores the ratings from external webs such as themoviedb.
+
+## [Knowledge Management](knowledge_management.md)
+
+* New: Use ebops to create anki cards.
+
+ - Ask the AI to generate [Anki cards](anki.md) based on the content.
+ - Save those anki cards in an orgmode (`anki.org`) document
+ - Use [`ebops add-anki-notes`](https://codeberg.org/lyz/ebops) to automatically add them to Anki
+
+### [Anki](anki.md)
+
+* New: [What to do when you need to edit a card but don't have the time.](anki.md#what-to-do-when-you-need-to-edit-a-card-but-don't-have-the-time)
+
+ You can mark it with a red flag so that you remember to edit it the next time you see it.
+
+* New: [Center images.](mkdocs.md#center-images)
+
+ In your config enable the `attr_list` extension:
+
+ ```yaml
+ markdown_extensions:
+   - attr_list
+ ```
+
+ On your `extra.css` file add the `center` class:
+
+ ```css
+ .center {
+   display: block;
+   margin: 0 auto;
+ }
+ ```
+
+ Now you can center elements by appending the attribute:
+
+ ~~~markdown
+ ![image](../_imatges/ebc_form_01.jpg){: .center}
+ ~~~
+
+* New: [Install the official sync server.](anki.md#install-the-official-sync-server)
+
+### [Analytical web reading](analytical_web_reading.md)
+
+* New: Introduce Analytical web reading.
+
+ One part of the web 3.0 is to be able to annotate and share comments on the web. This article is my best try to find a nice open source privacy friendly tool. Spoiler: there aren't any :P
+
+ The alternative I'm using so far is to process the data at the same time as I underline it.
+
+ - On the mobile/tablet you can split your screen and have Orgzly on one tab and the browser in the other, so that underlining and copy-pasting don't break the workflow too much.
+ - On the eBook I underline it and post process it afterwards.
+
+ The idea of using an underlining tool makes sense if you want to post process the content in a more efficient environment such as a laptop.
+
+ The use of Orgzly is kind of a preprocessing. If the underlining software could easily export the highlighted content along with the link to the source, then it would be much quicker.
+
+ The advantage of using Orgzly is also that it works today, both online and offline, and it is more privacy friendly.
+
+ In the post I review some of the existing solutions.
+
+### [Digital Gardens](digital_garden.md)
+
+* New: [Add the not by AI badge.](digital_garden.md#add-the-not-by-ai-badge)
+
+ [Not by AI](https://notbyai.fyi/) is an initiative to mark content as created by humans instead of AI.
+
+ To automatically add the badge to all your content you can use the next script:
+
+ ```bash
+ echo "Checking the Not by AI badge"
+ find docs -iname '*md' -print0 | while read -r -d $'\0' file; do
+   if ! grep -q not-by-ai.svg "$file"; then
+     echo "Adding the Not by AI badge to $file"
+     echo "[![](not-by-ai.svg){: .center}](https://notbyai.fyi)" >>"$file"
+   fi
+ done
+ ```
+
+ You can see how it's used in this blog by looking at the `Makefile` and the `gh-pages.yaml` workflow.
+
+### [Aleph](aleph.md)
+
+* New: [Add note on aleph and prometheus.](aleph.md#monitorization)
+
+ Aleph now exposes prometheus metrics on port 9100.
+
+* New: [Debug ingestion errors.](aleph.md#debug-ingestion-errors)
+
+ Assuming that you've [set up Loki to ingest your logs](https://github.com/alephdata/aleph/issues/2124), I've so far encountered the next ingest issues:
+
+ - `Cannot open image data using Pillow: broken data stream when reading image files`: The log trace that has this message also contains a field `trace_id` which identifies the ingestion process. With that `trace_id` you can get the first log trace with the field `logger = "ingestors.manager"`, which will contain the file path in the `message` field. Something similar to `Ingestor []`
+ - A traceback with the next string `Failed to process: Could not extract PDF file: FileDataError('cannot open broken document')`: This log trace has the file path in the `message` field. Something similar to `[] Failed to process: Could not extract PDF file: FileDataError('cannot open broken document')`
+
+ I thought of making a [python script to automate finding the files that triggered an error](loki.md#interact-with-loki-through-python), but in the end I extracted the file names manually as they weren't many.
+
+ Once you have the files that triggered the errors, the best way to handle them is to delete them from your investigation and ingest them again.
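+
+ That post-processing can be sketched in a few lines. This assumes the log records have already been fetched from Loki and parsed into dicts with the `trace_id`, `logger` and `message` fields described above; the function name and record shape are my own, not part of Aleph or Loki:
+
+ ```python
+ def files_for_failed_traces(error_records, all_records):
+     """Map each failing ingestion trace to the message of its first
+     ingestors.manager trace, which contains the file path."""
+     paths = {}
+     for record in all_records:
+         if record.get("logger") != "ingestors.manager":
+             continue
+         trace_id = record.get("trace_id")
+         # Keep only the first manager trace of each ingestion process.
+         if trace_id and trace_id not in paths:
+             paths[trace_id] = record["message"]
+     return [
+         paths[record["trace_id"]]
+         for record in error_records
+         if record.get("trace_id") in paths
+     ]
+ ```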
+
+* New: [Add support channel.](aleph.md#references)
+
+ [Support chat](https://alephdata.slack.com)
+
+* New: [API Usage.](aleph.md#api-usage)
+
+ The Aleph web interface is powered by a Flask HTTP API. Aleph supports an extensive API for searching documents and entities. It can also be used to retrieve raw metadata, source documents and other useful details. Aleph's API tries to follow a pragmatic approach based on the following principles:
+
+ - All API calls are prefixed with an API version; this version is `/api/2/`.
+ - Responses and requests are both encoded as JSON. Requests should have the `Content-Type` and `Accept` headers set to `application/json`.
+ - The application uses Representational State Transfer (REST) principles where convenient, but also has some procedural API calls.
+ - The API allows authorization via an API key or JSON Web Tokens.
+
+ **[Authentication and authorization](https://redocly.github.io/redoc/?url=https://aleph.occrp.org/api/openapi.json#section/Authentication-and-Authorization)**
+
+ By default, any Aleph search will return only public documents in responses to API requests.
+
+ If you want to access documents which are not marked public, you will need to sign into the tool. This can be done through the use of an API key. The API key for any account can be found by clicking on the "Profile" menu item in the navigation menu.
+
+ The API key must be sent on all queries using the `Authorization` HTTP header:
+
+ `Authorization: ApiKey 363af1e2b03b41c6b3adc604956e2f66`
+
+ Alternatively, the API key can also be sent as a query parameter under the `api_key` key.
+
+ Similarly, a JWT can be sent in the `Authorization` header, after it has been returned by the login and/or OAuth processes. Aleph does not use session cookies or any other type of stateful API.
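+
+ As a sketch of what an authenticated call looks like with only the Python standard library (the `/api/2/entities` endpoint and `ApiKey` header come from the description above; the helper name, host and key are placeholders):
+
+ ```python
+ import json
+ import urllib.parse
+ import urllib.request
+
+
+ def build_search_request(host, api_key, query):
+     """Build an authorized request against the /api/2/entities search endpoint."""
+     url = f"{host.rstrip('/')}/api/2/entities?" + urllib.parse.urlencode({"q": query})
+     return urllib.request.Request(
+         url,
+         headers={
+             # The ApiKey scheme described above; a JWT would go in the
+             # same header instead.
+             "Authorization": f"ApiKey {api_key}",
+             "Accept": "application/json",
+             "Content-Type": "application/json",
+         },
+     )
+
+
+ # Sending it (requires network access and a valid key):
+ # with urllib.request.urlopen(build_search_request(
+ #         "https://aleph.occrp.org", "YOUR_SECRET_API_KEY", "john doe")) as resp:
+ #     results = json.load(resp)
+ ```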
+
+* New: [Crossreferencing mentions with entities.](aleph.md#crossreferencing-mentions-with-entities)
+
+ [Mentions](https://docs.aleph.occrp.org/developers/explanation/cross-referencing/#mentions) are names of people or companies that Aleph automatically extracts from files you upload. Aleph includes mentions when cross-referencing a collection, but only in one direction.
+
+ Consider the following example:
+
+ - "Collection A" contains a file. The file mentions "John Doe".
+ - "Collection B" contains a Person entity named "John Doe".
+
+ If you cross-reference "Collection A", Aleph includes the mention of "John Doe" in the cross-referencing and will find a match for it in "Collection B".
+
+ However, if you cross-reference "Collection B", Aleph doesn't consider mentions when trying to find a match for the Person entity.
+
+ As long as you only want to compare the mentions in one specific collection against entities (but not mentions) in another collection, Aleph's cross-ref should be able to do that. If you want to compare entities in a specific collection against other entities and mentions in other collections, you will have to do that yourself.
+
+ If you have a limited number of collections, one option might be to fetch all mentions and automatically create entities for each mention using the API.
+
+ To fetch a list of mentions for a collection you can use the `/api/2/entities?filter:collection_id=137&filter:schemata=Mention` API request.
+
+* New: [Alephclient cli tool.](aleph.md#alephclient-cli-tool)
+
+ alephclient is a command-line client for Aleph. It can be used to bulk import structured data and files, and more, via the API, without direct access to the server.
+
+    **[Installation](https://docs.aleph.occrp.org/developers/how-to/data/install-alephclient/#how-to-install-the-alephclient-cli)**
+
+    You can install `alephclient` using pip, although I recommend using `pipx` instead:
+
+    ```bash
+    pipx install alephclient
+    ```
+
+    `alephclient` needs to know the URL of the Aleph instance to connect to. For privileged operations (e.g. accessing private datasets or writing data), it also needs your API key. You can find your API key in your user profile in the Aleph UI.
+
+    Both settings can be provided by setting the environment variables `ALEPHCLIENT_HOST` and `ALEPHCLIENT_API_KEY`, respectively, or by passing them in with the `--host` and `--api-key` options.
+
+    ```bash
+    export ALEPHCLIENT_HOST=https://aleph.occrp.org/
+    export ALEPHCLIENT_API_KEY=YOUR_SECRET_API_KEY
+    ```
+
+    You can now start using `alephclient`, for example to upload an entire directory to Aleph.
+
+    **[Upload an entire directory to Aleph](https://docs.aleph.occrp.org/developers/how-to/data/upload-directory/)**
+
+    While you can upload multiple files and even entire directories at once via the Aleph UI, using the `alephclient` CLI allows you to upload files in bulk much more quickly and reliably.
+
+    Run the following `alephclient` command to upload an entire directory to Aleph:
+
+    ```bash
+    alephclient crawldir --foreign-id wikileaks-cable /Users/sunu/data/cable
+    ```
+
+    This will upload all files in the directory `/Users/sunu/data/cable` (including its subdirectories) into an investigation with the foreign ID `wikileaks-cable`. If no investigation with this foreign ID exists, a new investigation is created (in theory, but it didn't work for me, so manually create the investigation and then copy its foreign ID).
+
+    If you’d like to import data into an existing investigation and do not know its foreign ID, you can find the foreign ID in the Aleph UI. Navigate to the investigation homepage. The foreign ID is listed in the sidebar on the right.
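+
+    If clicking through the UI is inconvenient, the foreign IDs can also be listed over the API. This is a sketch, assuming the `/api/2/collections` index endpoint and the same host and API key as the environment variables above:
+
+    ```python
+    import json
+    import urllib.request
+
+
+    def list_foreign_ids(payload: dict) -> list:
+        """Pull the foreign_id of every collection out of one API response page."""
+        return [collection.get("foreign_id") for collection in payload.get("results", [])]
+
+
+    def fetch_collections(host: str, api_key: str) -> dict:
+        """Fetch the first page of collections from the (assumed) /api/2/collections endpoint."""
+        request = urllib.request.Request(
+            f"{host.rstrip('/')}/api/2/collections",
+            headers={"Authorization": f"ApiKey {api_key}", "Accept": "application/json"},
+        )
+        with urllib.request.urlopen(request) as response:
+            return json.load(response)
+    ```
+
+    `fetch_collections(os.environ["ALEPHCLIENT_HOST"], os.environ["ALEPHCLIENT_API_KEY"])` would return the first page of collections, and `list_foreign_ids` extracts the identifiers to search through.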
+
+* New: [Other tools for the ecosystem.](aleph.md#other-tools-for-the-ecosystem)
+* New: [Available datasets.](aleph.md#available-datasets)
+
+    OpenSanctions helps investigators find leads, allows companies to manage risk and enables technologists to build data-driven products.
+
+    You can check [their datasets](https://www.opensanctions.org/datasets/).
+
+* New: [Offshore-graph.](aleph.md#offshore-graph)
+
+    [offshore-graph](https://github.com/opensanctions/offshore-graph) contains scripts that will merge the OpenSanctions Due Diligence dataset with the ICIJ OffshoreLeaks database in order to create a combined graph for analysis.
+
+    The result is a Cypher script to load the full graph into the Neo4J database and then browse it using the Linkurious investigation platform.
+
+    Based on name-based entity matching between the datasets, an analyst can use this graph to find offshore holdings linked to politically exposed and sanctioned individuals.
+
+    As a general alternative, you can easily export and convert entities from an Aleph instance to visualize them in Neo4j or Gephi using the ftm CLI: <https://docs.aleph.occrp.org/developers/how-to/data/export-network-graphs/>
+
+## Torrent management
+
+### [qBittorrent](qbittorrent.md)
+
+* New: [Troubleshoot Trackers stuck on Updating.](qbittorrent.md#trackers-stuck-on-updating)
+
+    Sometimes the issue comes from an improvable configuration. In the advanced settings:
+
+    - Ensure that there are enough [Max concurrent http announces](https://github.com/qbittorrent/qBittorrent/issues/15744): I changed from 50 to 500
+    - [Select the correct interface and Optional IP address to bind to](https://github.com/qbittorrent/qBittorrent/issues/14453). In my case I selected `tun0` as I'm using a vpn, and `All IPv4 addresses` as I don't use IPv6.
+
+### [Unpackerr](unpackerr.md)
+
+* New: [Completed item still waiting no extractable files found at.](unpackerr.md#completed-item-still-waiting-no-extractable-files-found-at)
+
+    This trace in the logs (which is super noisy) is nothing to worry about.
+
+    Unpackerr is just telling you something is stuck in your Sonarr queue. It's not an error, and it's not trying to extract it (because it has no compressed files). The fix is to figure out why it's stuck in the queue.
+
+# Health
+
+## [Teeth](teeth.md)
+
+* New: Suggestion on how to choose the toothpaste to buy.
+
+    When choosing a toothpaste choose the one that has a higher percentage of fluoride.
+
+# Coding
+
+## Languages
+
+### [Bash snippets](bash_snippets.md)
+
+* New: [Show the progression of a long running task with dots.](bash_snippets.md#show-the-progresion-of-a-long-running-task-with-dots)
+
+    ```bash
+    echo -n "Process X is running."
+
+    sleep 1
+    echo -n "."
+    sleep 1
+    echo -n "."
+
+    echo ""
+    ```
+
+* New: [Self delete shell script.](bash_snippets.md#self-delete-shell-script)
+
+    Add at the end of the script
+
+    ```bash
+    rm -- "$0"
+    ```
+
+    `$0` is a magic variable for the full path of the executed script.
+
+* New: [Add a user to the sudoers through command line.](bash_snippets.md#add-a-user-to-the-sudoers-through-command-line-)
+
+    Add the user to the sudo group:
+
+    ```bash
+    sudo usermod -a -G sudo <username>
+    ```
+
+    The change will take effect the next time the user logs in.
+
+    This works because `/etc/sudoers` is pre-configured to grant permissions to all members of this group (You should not have to make any changes to this):
+
+    ```bash
+    %sudo ALL=(ALL:ALL) ALL
+    ```
+
+* New: [Error management done well in bash.](bash_snippets.md#error-management-done-well-in-bash)
+
+    If you wish to capture error management in bash you can use the next format
+
+    ```bash
+    if ! echo "$EMAIL" >> "$USER_TOTP_FILE"; then
+        echo "** Error: could not associate email for user $USERNAME"
+        exit 1
+    fi
+    ```
+
+* New: [Compare two semantic versions.](bash_snippets.md#compare-two-semantic-versions)
+
+    [This article](https://www.baeldung.com/linux/compare-dot-separated-version-string) gives a lot of ways to do it. For my case the simplest is to use `dpkg` to compare two strings in dot-separated version format in bash.
+
+    ```bash
+    dpkg --compare-versions <version1> <relation> <version2>
+    ```
+
+    If the condition is `true`, the status code returned by `dpkg` will be zero (indicating success). So, we can use this command in an `if` statement to compare two version numbers:
+
+    ```bash
+    $ if dpkg --compare-versions "2.11" "lt" "3"; then echo true; else echo false; fi
+    true
+    ```
+
+* New: [Exclude list of extensions from find command.](bash_snippets.md#exclude-list-of-extensions-from-find-command-)
+
+    ```bash
+    find . -not \( -name '*.sh' -o -name '*.log' \)
+    ```
+
+* New: [Do relative import of a bash library.](bash_snippets.md#do-relative-import-of-a-bash-library)
+
+    If you want to import a file `lib.sh` that lives in the same directory as the file that is importing it you can use the next snippet:
+
+    ```bash
+    source "$(dirname "$(realpath "$0")")/lib.sh"
+    ```
+
+    If you use `source ./lib.sh` you will get an import error if you run the script on any other place that is not the directory where `lib.sh` lives.
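+
+    The `dpkg` trick for comparing versions is Debian specific. If the comparison happens inside a Python script anyway, you can compare the parsed tuples directly; a naive sketch that only handles plain dot-separated numbers (no pre-release suffixes):
+
+    ```python
+    def version_tuple(version: str) -> tuple:
+        """Turn a dot-separated version like '2.11' into (2, 11) so tuples compare numerically."""
+        return tuple(int(part) for part in version.split("."))
+
+
+    def version_lt(left: str, right: str) -> bool:
+        """Rough equivalent of `dpkg --compare-versions left lt right` for plain versions."""
+        return version_tuple(left) < version_tuple(right)
+    ```
+
+    `version_lt("2.11", "3")` is `True`, mirroring the `dpkg --compare-versions "2.11" "lt" "3"` example above.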
+
+* New: [Check the battery status.](bash_snippets.md#check-the-battery-status)
+
+    This [article gives many ways to check the status of a battery](https://www.howtogeek.com/810971/how-to-check-a-linux-laptops-battery-from-the-command-line/), for my purposes the next one is enough
+
+    ```bash
+    cat /sys/class/power_supply/BAT0/capacity
+    ```
+
+* New: [Check if file is being sourced.](bash_snippets.md#check-if-file-is-being-sourced)
+
+    Assuming that you are running bash, put the following code near the start of the script that you want to be sourced but not executed:
+
+    ```bash
+    if [ "${BASH_SOURCE[0]}" -ef "$0" ]
+    then
+        echo "Hey, you should source this script, not execute it!"
+        exit 1
+    fi
+    ```
+
+    Under bash, `${BASH_SOURCE[0]}` will contain the name of the current file that the shell is reading regardless of whether it is being sourced or executed.
+
+    By contrast, `$0` is the name of the current file being executed.
+
+    `-ef` tests if these two files are the same file. If they are, we alert the user and exit.
+
+    Neither `-ef` nor `BASH_SOURCE` are POSIX. While `-ef` is supported by ksh, yash, zsh and Dash, `BASH_SOURCE` requires bash. In zsh, however, `${BASH_SOURCE[0]}` could be replaced by `${(%):-%N}`.
+
+* New: [Parsing bash arguments.](bash_snippets.md#parsing-bash-arguments)
+
+    Long story short, it's nasty, think of using a python script with [typer](typer.md) instead.
+
+    There are some possibilities to do this:
+
+    - [The old getopts](https://www.baeldung.com/linux/bash-parse-command-line-arguments)
+    - The [argbash](https://github.com/matejak/argbash) library
+    - [Build your own parser](https://medium.com/@Drew_Stokes/bash-argument-parsing-54f3b81a6a8f)
+
+* New: [Fix docker error: KeyError ContainerConfig.](bash_snippets.md#fix-docker-error:-keyerror-containerconfig)
+
+    You need to run `docker-compose down` and then up again.
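+
+    On the argument parsing front, if pulling in [typer](typer.md) is too heavy for a small script, the standard library's `argparse` already covers most of what `getopts` is used for; a minimal sketch:
+
+    ```python
+    import argparse
+
+
+    def build_parser() -> argparse.ArgumentParser:
+        """A two-option CLI, as a stand-in for a getopts while-loop."""
+        parser = argparse.ArgumentParser(description="Example replacement for bash argument parsing.")
+        parser.add_argument("name", help="who to greet")
+        parser.add_argument("-c", "--count", type=int, default=1, help="number of greetings")
+        return parser
+
+
+    def main(argv=None) -> None:
+        """Parse the arguments and greet accordingly."""
+        args = build_parser().parse_args(argv)
+        for _ in range(args.count):
+            print(f"Hello {args.name}")
+    ```
+
+    Running `main(["world", "--count", "2"])` prints the greeting twice, and `main()` falls back to `sys.argv`.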
+
+* New: [Set static ip with nmcli.](bash_snippets.md#set-static-ip-with-nmcli)
+
+    ```bash
+    nmcli con mod "your-ssid" \
+      ipv4.method "manual" \
+      ipv4.addresses "your_desired_ip" \
+      ipv4.gateway "your_desired_gateway" \
+      ipv4.dns "1.1.1.1,2.2.2.2" \
+      ipv4.routes "192.168.32.0 0.0.0.0"
+    ```
+
+    The last one is to be able to connect to your LAN, change the value accordingly.
+
+* New: [Fix unbound variable error.](bash_snippets.md#fix-unbound-variable-error)
+
+    You can check if the variable is set and non-empty with:
+
+    ```bash
+    [ -n "${myvariable-}" ]
+    ```
+
+* New: [Compare two semantic versions with sort.](bash_snippets.md#with-sort)
+
+    If you want to make it work in non-Debian based systems you can use `sort -V -C`
+
+    ```bash
+    printf "2.0.0\n2.1.0\n" | sort -V -C # Return code 0
+    printf "2.2.0\n2.1.0\n" | sort -V -C # Return code 1
+    ```
+
+### [Bash testing](bats.md)
+
+* New: Introduce bats.
+
+    Bash Automated Testing System is a TAP-compliant testing framework for Bash 3.2 or above. It provides a simple way to verify that the UNIX programs you write behave as expected.
+
+    A Bats test file is a Bash script with special syntax for defining test cases. Under the hood, each test case is just a function with a description.
+
+    ```bash
+    @test "addition using bc" {
+      result="$(echo 2+2 | bc)"
+      [ "$result" -eq 4 ]
+    }
+
+    @test "addition using dc" {
+      result="$(echo 2 2+p | dc)"
+      [ "$result" -eq 4 ]
+    }
+    ```
+
+    Bats is most useful when testing software written in Bash, but you can use it to test any UNIX program.
+
+    References:
+
+    - [Source](https://github.com/bats-core/bats-core)
+    - [Docs](https://bats-core.readthedocs.io/)
+
+### [aiocron](aiocron.md)
+
+* New: Introduce aiocron.
+
+    [`aiocron`](https://github.com/gawel/aiocron?tab=readme-ov-file) is a python library to run cron jobs in python asynchronously.
+
+    **Usage**
+
+    You can run it using a decorator
+
+    ```python
+    >>> import aiocron
+    >>> import asyncio
+    >>>
+    >>> @aiocron.crontab('*/30 * * * *')
+    ... async def attime():
+    ...     print('run')
+    ...
+    >>> asyncio.get_event_loop().run_forever()
+    ```
+
+    Or by calling the function yourself
+
+    ```python
+    >>> cron = aiocron.crontab('0 * * * *', func=yourcoroutine, start=False)
+    ```
+
+    [Here's a simple example](https://stackoverflow.com/questions/65551736/python-3-9-scheduling-periodic-calls-of-async-function-with-different-paramete) on how to run it in a script:
+
+    ```python
+    import asyncio
+    from datetime import datetime
+    import aiocron
+
+    async def foo(param):
+        print(datetime.now().time(), param)
+
+    async def main():
+        cron_min = aiocron.crontab('*/1 * * * *', func=foo, args=("At every minute",), start=True)
+        cron_hour = aiocron.crontab('0 */1 * * *', func=foo, args=("At minute 0 past every hour.",), start=True)
+        cron_day = aiocron.crontab('0 9 */1 * *', func=foo, args=("At 09:00 on every day-of-month",), start=True)
+        cron_week = aiocron.crontab('0 9 * * Mon', func=foo, args=("At 09:00 on every Monday",), start=True)
+
+        while True:
+            await asyncio.sleep(1)
+
+    asyncio.run(main())
+    ```
+
+    You have more complex examples [in the repo](https://github.com/gawel/aiocron/tree/master/examples)
+
+    **Installation**
+
+    ```bash
+    pip install aiocron
+    ```
+
+    **References**
+
+    - [Source](https://github.com/gawel/aiocron?tab=readme-ov-file)
+
+### [Lua](lua.md)
+
+* New: [Inspect contents of Lua table in Neovim.](lua.md#inspect-contents-of-lua-table-in-neovim)
+
+    When using Lua inside of Neovim you may need to view the contents of Lua tables, which are a first class data structure in the Lua world. Tables in Lua can represent ordinary arrays, lists, symbol tables, sets, records, graphs, trees, etc.
+
+    If you try to just print a table directly, you will get the reference address for that table instead of the content, which is not very useful for most debugging purposes:
+
+    ```lua
+    :lua print(vim.api.nvim_get_mode())
+    " table: 0x7f5b93e5ff88
+    ```
+
+    To solve this, Neovim provides the `vim.inspect` function as part of its API. It serializes the content of any Lua object into a human readable string.
+
+    For example you can get information about the current mode like so:
+
+    ```lua
+    :lua print(vim.inspect(vim.api.nvim_get_mode()))
+    " { blocking = false, mode = "n"}
+    ```
+
+* New: [Send logs to journald.](docker.md#send-logs-to-journald)
+
+    The `journald` logging driver sends container logs to the systemd journal. Log entries can be retrieved using the `journalctl` command, through use of the journal API, or using the `docker logs` command.
+
+    In addition to the text of the log message itself, the `journald` log driver stores the following metadata in the journal with each message:
+
+    | Field | Description |
+    | --- | ---- |
+    | CONTAINER_ID | The container ID truncated to 12 characters. |
+    | CONTAINER_ID_FULL | The full 64-character container ID. |
+    | CONTAINER_NAME | The container name at the time it was started. If you use docker rename to rename a container, the new name isn't reflected in the journal entries. |
+    | CONTAINER_TAG, SYSLOG_IDENTIFIER | The container tag (see the log tag option documentation). |
+    | CONTAINER_PARTIAL_MESSAGE | A field that flags log integrity. Improve logging of long log lines. |
+
+    To use the journald driver as the default logging driver, set the log-driver and log-opts keys to appropriate values in the `daemon.json` file, which is located in `/etc/docker/`.
+
+    ```json
+    {
+      "log-driver": "journald"
+    }
+    ```
+
+    Restart Docker for the changes to take effect.
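+
+    Once the containers log to the journal, the metadata fields above can be queried programmatically. A sketch that shells out to `journalctl` (it assumes `journalctl` is available, and the container name is just an example):
+
+    ```python
+    import json
+    import subprocess
+
+
+    def journalctl_command(container_name: str) -> list:
+        """Build a journalctl invocation filtered on the CONTAINER_NAME journal field."""
+        return [
+            "journalctl",
+            f"CONTAINER_NAME={container_name}",
+            "--output=json",
+            "--no-pager",
+        ]
+
+
+    def container_logs(container_name: str) -> list:
+        """Return the journal entries of one container as parsed dictionaries."""
+        result = subprocess.run(
+            journalctl_command(container_name), capture_output=True, text=True, check=True
+        )
+        return [json.loads(line) for line in result.stdout.splitlines() if line]
+    ```
+
+    `container_logs("my-container")` returns one dictionary per journal entry, each carrying the `CONTAINER_*` fields from the table above.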
+
+* New: [Send the logs to loki.](docker.md#send-the-logs-to-loki)
+
+    There are many ways to send logs to loki
+
+    - Using the json driver and sending them to loki with promtail with the docker driver
+    - Using the docker plugin: Grafana Loki officially supports a Docker plugin that will read logs from Docker containers and ship them to Loki.
+
+    I would not recommend this path because there is a known issue that deadlocks the docker daemon :S. The driver keeps all logs in memory and will drop log entries if Loki is not reachable and if the quantity of `max_retries` has been exceeded. To avoid the dropping of log entries, setting `max_retries` to zero allows unlimited retries; the driver will continue trying forever until Loki is again reachable. Trying forever may have undesired consequences, because the Docker daemon will wait for the Loki driver to process all logs of a container, until the container is removed. Thus, the Docker daemon might wait forever if the container is stuck.
+
+    The wait time can be lowered by setting `loki-retries=2`, `loki-max-backoff=800ms`, `loki-timeout=1s` and `keep-file=true`. This way the daemon will be locked only for a short time and the logs will be persisted locally when the Loki client is unable to re-connect.
+
+    To avoid this issue, use the Promtail Docker service discovery.
+    - Using the journald driver and sending them to loki with promtail with the journald driver. This has worked for me but the labels extracted are not that great.
+
+* New: [Solve syslog getting filled up with docker network recreation.](docker.md#syslog-getting-filled-up-with-docker-network-recreation)
+
+    If you find yourself with your syslog getting filled up by lines similar to:
+
+    ```
+    Jan 15 13:19:19 home kernel: [174716.097109] eth2: renamed from veth0adb07e
+    Jan 15 13:19:20 home kernel: [174716.145281] IPv6: ADDRCONF(NETDEV_CHANGE): vethcd477bc: link becomes ready
+    Jan 15 13:19:20 home kernel: [174716.145337] br-1ccd0f48be7c: port 5(vethcd477bc) entered blocking state
+    Jan 15 13:19:20 home kernel: [174716.145338] br-1ccd0f48be7c: port 5(vethcd477bc) entered forwarding state
+    Jan 15 13:19:20 home kernel: [174717.081132] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
+    Jan 15 13:19:20 home kernel: [174717.081176] vethc4da041: renamed from eth0
+    Jan 15 13:19:21 home kernel: [174717.214911] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
+    Jan 15 13:19:21 home kernel: [174717.215917] device veth31cdd6f left promiscuous mode
+    Jan 15 13:19:21 home kernel: [174717.215919] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
+    ```
+
+    It probably means that some docker container is getting recreated continuously. Those traces are the normal logs of docker creating the networks, but as they appear each time the container starts, if it's restarting continuously then you have a problem.
+
+* New: [Minify the images.](docker.md#minify-the-images)
+
+    [dive](https://github.com/wagoodman/dive) and [slim](https://github.com/slimtoolkit/slim) are two cli tools you can use to optimise the size of your dockers.
+
+### [Logging](python_logging.md)
+
+* New: [Configure the logging module to log directly to systemd's journal.](python_logging.md#configure-the-logging-module-to-log-directly-to-systemd's-journal)
+
+    To use `systemd.journal` in Python, you need to install the `systemd-python` package. This package provides bindings for systemd functionality.
+
+    Install it using pip:
+
+    ```bash
+    pip install systemd-python
+    ```
+
+    Below is an example Python script that configures logging to send messages to the systemd journal:
+
+    ```python
+    import logging
+    from systemd.journal import JournalHandler
+
+    logger = logging.getLogger('my_app')
+    logger.setLevel(logging.DEBUG)  # Set the logging level
+
+    journal_handler = JournalHandler()
+    journal_handler.setLevel(logging.DEBUG)  # Adjust logging level if needed
+    journal_handler.addFilter(
+        lambda record: setattr(record, "SYSLOG_IDENTIFIER", "my_app") or True
+    )
+
+    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    journal_handler.setFormatter(formatter)
+
+    logger.addHandler(journal_handler)
+
+    logger.info("This is an info message.")
+    logger.error("This is an error message.")
+    logger.debug("Debugging information.")
+    ```
+
+    When you run the script, the log messages will be sent to the systemd journal. You can view them using the `journalctl` command:
+
+    ```bash
+    sudo journalctl -f
+    ```
+
+    This command will show the latest log entries in real time. You can filter by your application using:
+
+    ```bash
+    sudo journalctl -f -t my_app
+    ```
+
+    Replace `my_app` with the `SYSLOG_IDENTIFIER` you set in the filter above; `journalctl -t` matches on that identifier, not on the logger name.
+
+    **Additional Tips**
+
+    - **Tagging**: You can set a custom identifier for your logs through the `SYSLOG_IDENTIFIER` record attribute, as done in the filter above. This will allow you to filter logs using `journalctl -t your_tag`.
+    - **Log Levels**: You can control the verbosity of the logs by setting different levels (e.g., `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`).
+
+    **Example Output in the Systemd Journal**
+
+    You should see entries similar to the following in the systemd journal:
+
+    ```
+    Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,123 - my_app - INFO - This is an info message.
+    Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,124 - my_app - ERROR - This is an error message.
+
+    Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,125 - my_app - DEBUG - Debugging information.
+    ```
+
+    This approach ensures that your logs are accessible through standard systemd tools and are consistent with other system logs.
+
+### [SQLite](boto3.md)
+
+* New: [Get running instances.](boto3.md#get-running-instances)
+
+    ```python
+    import boto3
+
+    ec2 = boto3.client('ec2')
+
+    running_instances = [
+        instance
+        for page in ec2.get_paginator('describe_instances').paginate()
+        for reservation in page['Reservations']
+        for instance in reservation['Instances']
+        if instance['State']['Name'] == 'running'
+    ]
+    ```
+
+* New: [Order by a column descending.](sqlite.md#order-by-a-column-descending)
+
+    ```sql
+    SELECT
+       select_list
+    FROM
+       table
+    ORDER BY
+      column_1 ASC,
+      column_2 DESC;
+    ```
+
+### [Protocols](python_protocols.md)
+
+* New: Introduce Python Protocols.
+
+    The Python type system supports two ways of deciding whether two objects are compatible as types: nominal subtyping and structural subtyping.
+
+    Nominal subtyping is strictly based on the class hierarchy. If class `Dog` inherits class `Animal`, it’s a subtype of `Animal`. Instances of `Dog` can be used when `Animal` instances are expected. This form of subtyping is what Python’s type system predominantly uses: it’s easy to understand and produces clear and concise error messages, and matches how the native `isinstance` check works – based on class hierarchy.
+
+    Structural subtyping is based on the operations that can be performed with an object. Class `Dog` is a structural subtype of class `Animal` if the former has all attributes and methods of the latter, and with compatible types.
+
+    Structural subtyping can be seen as a static equivalent of duck typing, which is well known to Python programmers.
See [PEP 544](https://peps.python.org/pep-0544/) for the detailed specification of protocols and structural subtyping in Python. + + **Usage** + + You can define your own protocol class by inheriting the special Protocol class: + + ```python + from typing import Iterable + from typing_extensions import Protocol + + class SupportsClose(Protocol): + # Empty method body (explicit '...') + def close(self) -> None: ... + + class Resource: # No SupportsClose base class! + + def close(self) -> None: + self.resource.release() + + # ... other methods ... + + def close_all(items: Iterable[SupportsClose]) -> None: + for item in items: + item.close() + + close_all([Resource(), open('some/file')]) # OK + ``` + + `Resource` is a subtype of the `SupportsClose` protocol since it defines a compatible close method. Regular file objects returned by `open()` are similarly compatible with the protocol, as they support `close()`. + + If you want to define a docstring on the method use the next syntax: + + ```python + def load(self, filename: Optional[str] = None) -> None: + """Load a configuration file.""" + ... 
+ ``` + + **[Make protocols work with `isinstance`](https://mypy.readthedocs.io/en/stable/protocols.html#using-isinstance-with-protocols)** + To check an instance against the protocol using `isinstance`, we need to decorate our protocol with `@runtime_checkable` + + **[Make a protocol property variable](https://mypy.readthedocs.io/en/stable/protocols.html#invariance-of-protocol-attributes)** + + **[Make protocol of functions](https://mypy.readthedocs.io/en/stable/protocols.html#callback-protocols)** + + **References** + - [Mypy article on protocols](https://mypy.readthedocs.io/en/stable/protocols.html) + - [Predefined protocols reference](https://mypy.readthedocs.io/en/stable/protocols.html#predefined-protocol-reference) + +### [Logql](logql.md) + +* New: [Compare the values of a metric with the past.](logql.md#compare-the-values-of-a-metric-with-the-past) + + The offset modifier allows changing the time offset for individual range vectors in a query. + + For example, the following expression counts all the logs within the last ten minutes to five minutes rather than last five minutes for the MySQL job. Note that the offset modifier always needs to follow the range vector selector immediately. + + ```logql + count_over_time({job="mysql"}[5m] offset 5m) // GOOD + count_over_time({job="mysql"}[5m]) offset 5m // INVALID + ``` + +### [FastAPI](fastapi.md) + +* New: Launch the server from within python. + + ```python + import uvicorn + if __name__ == "__main__": + uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True) + ``` + +* New: Add the request time to the logs. 
+ + For more information on changing the logging read [1](https://nuculabs.dev/p/fastapi-uvicorn-logging-in-production) + + To set the datetime of the requests [use this configuration](https://stackoverflow.com/questions/62894952/fastapi-gunicon-uvicorn-access-log-format-customization) + + ```python + @asynccontextmanager + async def lifespan(api: FastAPI): + logger = logging.getLogger("uvicorn.access") + console_formatter = uvicorn.logging.ColourizedFormatter( + "{asctime} {levelprefix} : {message}", style="{", use_colors=True + ) + logger.handlers[0].setFormatter(console_formatter) + yield + + api = FastAPI(lifespan=lifespan) + ``` + +### [Graphql](graphql.md) + +* New: Introduce GraphQL. + + [GraphQL](https://graphql.org/) is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. 
+
+    To use it with python you can use [Ariadne](https://ariadnegraphql.org/) ([source](https://github.com/mirumee/ariadne))
+
+### [Pytest](pytest.md)
+
+* New: [Changing the directory when running tests but switching it back after the test ends.](pytest.md#changing-the-directory-when-running-tests-but-switching-it-back-after-the-test-ends)
+
+### [nodejs](nodejs.md)
+
+* New: [Install using nvm.](nodejs.md#using-nvm)
+
+    ```bash
+    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
+
+    nvm install 22
+
+    node -v # should print `v22.12.0`
+
+    npm -v # should print `10.9.0`
+    ```
+
+### [Python Snippets](python_snippets.md)
+
+* New: [Get unique items between two lists.](python_snippets.md#get-unique-items-between-two-lists)
+
+    If you want all items from the second list that do not appear in the first list you can write:
+
+    ```python
+    x = [1,2,3,4]
+    f = [1,11,22,33,44,3,4]
+
+    result = set(f) - set(x)
+    ```
+
+* New: [Pad number with zeros.](python_snippets.md#pad-number-with-zeros)
+
+    ```python
+    number = 1
+    print(f"{number:02d}")
+    ```
+
+* New: [Parse a datetime from an epoch.](python_snippets.md#parse-a-datetime-from-an-epoch)
+
+    ```python
+    >>> import datetime
+    >>> datetime.datetime.fromtimestamp(1347517370).strftime('%Y-%m-%d %H:%M:%S')
+    '2012-09-13 02:22:50'
+    ```
+
+* New: [Fix variable is unbound pyright error.](python_snippets.md#fix-variable-is-unbound-pyright-error)
+
+    You may receive these warnings if you set variables inside if or try/except blocks such as the next one:
+
+    ```python
+    def x():
+        y = True
+        if y:
+            a = 1
+        print(a)  # "a" is possibly unbound
+    ```
+
+    The easy fix is to set `a = None` outside those blocks
+
+    ```python
+    def x():
+        a = None
+        y = True
+        if y:
+            a = 1
+        print(a)  # OK: "a" is always bound
+    ```
+
+* New: [Investigate a class attributes.](python_snippets.md#investigate-a-class-attributes)
+
+    [Investigate a class attributes with inspect](https://docs.python.org/3/library/inspect.html)
+
+* New: [Expire the 
cache of the lru_cache.](python_snippets.md#expire-the-cache-of-the-lru_cache)
+
+    The `lru_cache` decorator caches forever; a way to prevent it is by adding one more parameter to your expensive function: `ttl_hash=None`. This new parameter is a so-called "time sensitive hash", whose only purpose is to affect `lru_cache`. For example:
+
+    ```python
+    from functools import lru_cache
+    import time
+
+    @lru_cache()
+    def my_expensive_function(a, b, ttl_hash=None):
+        del ttl_hash  # to emphasize we don't use it and to shut pylint up
+        return a + b  # horrible CPU load...
+
+    def get_ttl_hash(seconds=3600):
+        """Return the same value within `seconds` time period"""
+        return round(time.time() / seconds)
+
+    res = my_expensive_function(2, 2, ttl_hash=get_ttl_hash())
+    ```
+
+* New: [Kill a process by its PID.](python_snippets.md#kill-a-process-by-it's-pid)
+
+    ```python
+    import os
+    import signal
+
+    os.kill(pid, signal.SIGTERM)  # or signal.SIGKILL
+    ```
+
+* New: [Convert the parameter of an API get request to a valid field.](python_snippets.md#convert-the-parameter-of-an-api-get-request-to-a-valid-field)
+
+    For example if the argument has `/`:
+
+    ```python
+    from urllib.parse import quote
+
+    quote("value/with/slashes", safe="")
+    ```
+
+    Will return `value%2Fwith%2Fslashes`. Note that `safe=""` is needed because `quote` doesn't escape `/` by default.
+
+* New: [Get the type hints of an object.](python_snippets.md#get-the-type-hints-of-an-object)
+
+    ```python
+    from typing import Annotated, NamedTuple, get_type_hints
+
+    class Student(NamedTuple):
+        name: Annotated[str, 'some marker']
+
+    get_type_hints(Student) == {'name': str}
+    get_type_hints(Student, include_extras=False) == {'name': str}
+    get_type_hints(Student, include_extras=True) == {
+        'name': Annotated[str, 'some marker']
+    }
+    ```
+
+* New: [Type hints of a python module.](python_snippets.md#type-hints-of-a-python-module)
+
+    ```python
+    from types import ModuleType
+    import os
+
+    assert isinstance(os, ModuleType)
+    ```
+
+* New: [Get all the classes of a python 
module.](python_snippets.md#get-all-the-classes-of-a-python-module)
+
+    ```python
+    import inspect
+    import os
+    from importlib.util import module_from_spec, spec_from_file_location
+
+    def _load_classes_from_directory(self, directory):
+        classes = []
+        for file_name in os.listdir(directory):
+            if file_name.endswith(".py") and file_name != "__init__.py":
+                module_name = os.path.splitext(file_name)[0]
+                module_path = os.path.join(directory, file_name)
+
+                # Import the module dynamically
+                spec = spec_from_file_location(module_name, module_path)
+                if spec is None or spec.loader is None:
+                    raise ValueError(
+                        f"Error loading the spec of {module_name} from {module_path}"
+                    )
+                module = module_from_spec(spec)
+                spec.loader.exec_module(module)
+
+                # Retrieve all classes from the module
+                module_classes = inspect.getmembers(module, inspect.isclass)
+                classes.extend(module_classes)
+        return classes
+    ```
+
+* New: [Import files from other directories.](python_snippets.md#import-files-from-other-directories)
+
+    Add the directory where you have your function to `sys.path`
+
+    ```python
+    import sys
+
+    sys.path.append("**Put here the directory where you have the file with your function**")
+
+    from file import function
+    ```
+
+* New: [Use Path of pathlib write_text in append mode.](python_snippets.md#use-path-of-pathlib-write_text-in-append-mode)
+
+    It's not supported, you need to `open` it:
+
+    ```python
+    with my_path.open("a") as f:
+        f.write("...")
+    ```
+
+* New: [Suppress ANN401 for dynamically typed *args and **kwargs.](python_snippets.md#suppress-ann401-for-dynamically-typed-*args-and-**kwargs)
+
+    Use `object` instead:
+
+    ```python
+    def function(*args: object, **kwargs: object) -> None:
+    ```
+
+* New: [One liner conditional.](python_snippets.md#one-liner-conditional)
+
+    To write an if-then-else statement in Python so that it fits on one line you can use:
+
+    ```python
+    fruit = 'Apple'
+    isApple = True if fruit == 'Apple' else False
+    ```
+
+* New: [Get package data relative path.](python_snippets.md#get-package-data-relative-path)
+
+    If you want to reference files from the 
foo/package1/resources folder you would want to use the `__file__` variable of the module. Inside `foo/package1/__init__.py`:
+
+    ```python
+    from os import path
+    resources_dir = path.join(path.dirname(__file__), 'resources')
+    ```
+
+* New: [Compare file and directories.](python_snippets.md#compare-file-and-directories)
+
+    The filecmp module defines functions to compare files and directories, with various optional time/correctness trade-offs. For comparing files, see also the difflib module.
+
+    ```python
+    from filecmp import dircmp
+
+    def print_diff_files(dcmp):
+        for name in dcmp.diff_files:
+            print("diff_file %s found in %s and %s" % (name, dcmp.left, dcmp.right))
+        for sub_dcmp in dcmp.subdirs.values():
+            print_diff_files(sub_dcmp)
+
+    dcmp = dircmp('dir1', 'dir2')
+    print_diff_files(dcmp)
+    ```
+
+* New: [Send a linux desktop notification.](python_snippets.md#send-a-linux-desktop-notification)
+
+    To show a Linux desktop notification from a Python script, you can use the `notify2` library (although [its last commit was done in 2017](https://pypi.org/project/notify2/)). This library provides an easy way to send desktop notifications on Linux.
+
+    Alternatively, you can use the `subprocess` module to call the `notify-send` command-line utility directly. This is a more straightforward method but requires `notify-send` to be installed.
+
+    ```python
+    import subprocess
+
+    def send_notification(title: str, message: str = "", urgency: str = "normal") -> None:
+        """Send a desktop notification using notify-send.
+
+        Args:
+            title (str): The title of the notification.
+            message (str): The message body of the notification. Defaults to an empty string.
+            urgency (str): The urgency level of the notification. Can be 'low', 'normal', or 'critical'. Defaults to 'normal'.
+ """ + subprocess.run(["notify-send", "-u", urgency, title, message]) + ``` + +* New: [Get the error string.](python_snippets.md#get-the-error-string) + + ```python + + import traceback + + def cause_error(): + return 1 / 0 # This will raise a ZeroDivisionError + + try: + cause_error() + except Exception as error: + # Capture the exception traceback as a string + error_message = "".join(traceback.format_exception(None, error, error.__traceback__)) + print("An error occurred:\n", error_message) + ``` + +### [Goodconf](goodconf.md) + +* New: [Initialize the config with a default value if the file doesn't exist.](goodconf.md#initialize-the-config-with-a-default-value-if-the-file-doesn't-exist) + + ```python + def load(self, filename: Optional[str] = None) -> None: + self._config_file = filename + if not self.store_dir.is_dir(): + log.warning("The store directory doesn't exist. Creating it") + os.makedirs(str(self.store_dir)) + if not Path(self.config_file).is_file(): + log.warning("The yaml store file doesn't exist. Creating it") + self.save() + super().load(filename) + + ``` + feat(goodconf#Config saving) + + So far [`goodconf` doesn't support saving the config](https://github.com/lincolnloop/goodconf/issues/12). 
Until it's ready you can use the next snippet: + + ```python + class YamlStorage(GoodConf): + """Adapter to store and load information from a yaml file.""" + + @property + def config_file(self) -> str: + """Return the path to the config file.""" + return str(self._config_file) + + @property + def store_dir(self) -> Path: + """Return the path to the store directory.""" + return Path(self.config_file).parent + + def reload(self) -> None: + """Reload the contents of the authentication store.""" + self.load(self.config_file) + + def load(self, filename: Optional[str] = None) -> None: + """Load a configuration file.""" + if not filename: + filename = f"{self.store_dir}/data.yaml" + super().load(self.config_file) + + def save(self) -> None: + """Save the contents of the authentication store.""" + with open(self.config_file, "w+", encoding="utf-8") as file_cursor: + yaml = YAML() + yaml.default_flow_style = False + yaml.dump(self.dict(), file_cursor) + ``` + feat(google_chrome#Open a specific profile): Open a specific profile + + ```bash + google-chrome --profile-directory="Profile Name" + ``` + + Where `Profile Name` is one of the profiles listed under `ls ~/.config/chromium | grep -i profile`. + +### [Inotify](python_inotify.md) + +* New: Introduce python_inotify. + + [inotify](https://pypi.org/project/inotify/) is a Python library that acts as a bridge to the Linux kernel's `inotify` subsystem, which allows you to register one or more directories for watching, and to simply block and wait for notification events. This is obviously far more efficient than polling one or more directories to determine if anything has changed.
+ + Installation: + + ```bash + pip install inotify + ``` + + Basic example using a loop: + + ```python + import inotify.adapters + + def _main(): + i = inotify.adapters.Inotify() + + i.add_watch('/tmp') + + with open('/tmp/test_file', 'w'): + pass + + for event in i.event_gen(yield_nones=False): + (_, type_names, path, filename) = event + + print("PATH=[{}] FILENAME=[{}] EVENT_TYPES={}".format( + path, filename, type_names)) + + if __name__ == '__main__': + _main() + ``` + + Output: + + ``` + PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_MODIFY'] + PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_OPEN'] + PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_CLOSE_WRITE'] + ``` + + Basic example without a loop: + + ```python + import inotify.adapters + + def _main(): + i = inotify.adapters.Inotify() + + i.add_watch('/tmp') + + with open('/tmp/test_file', 'w'): + pass + + events = i.event_gen(yield_nones=False, timeout_s=1) + events = list(events) + + print(events) + + if __name__ == '__main__': + _main() + ``` + + The wait will be done in the `list(events)` line + +* Correction: Deprecate inotify. + + DEPRECATED: As of 2024-11-15 it's been 4 years since the last commit. [watchdog](watchdog_python.md) has 6.6k stars and last commit was done 2 days ago. + +### [watchdog](watchdog_python.md) + +* New: Introduce watchdog. + + [watchdog](https://github.com/gorakhargosh/watchdog?tab=readme-ov-file) is a Python library and shell utilities to monitor filesystem events. + + Cons: + + - The [docs](https://python-watchdog.readthedocs.io/en/stable/api.html) suck. 
+ + **Installation** + + ```bash + pip install watchdog + ``` + + **Usage** + + A simple program that uses watchdog to monitor directories specified as command-line arguments and logs events generated: + + ```python + import time + + from watchdog.events import FileSystemEvent, FileSystemEventHandler + from watchdog.observers import Observer + + class MyEventHandler(FileSystemEventHandler): + def on_any_event(self, event: FileSystemEvent) -> None: + print(event) + + event_handler = MyEventHandler() + observer = Observer() + observer.schedule(event_handler, ".", recursive=True) + observer.start() + try: + while True: + time.sleep(1) + finally: + observer.stop() + observer.join() + ``` + + **References** + - [Source](https://github.com/gorakhargosh/watchdog?tab=readme-ov-file) + - [Docs](https://python-watchdog.readthedocs.io) + + +### [Pydantic](pydantic.md) + +* New: Nicely show validation errors. + + A nice way of showing it is to capture the error and print it yourself: + + ```python + try: + model = Model( + state=state, + ) + except ValidationError as error: + log.error(f'Error building model with state {state}') + raise error + ``` + +* New: [Load a pydantic model from json.](pydantic.md#load-a-pydantic-model-from-json) + + You can use the [`model_validate_json`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.main.BaseModel.model_validate_json) method that will validate and return an object with the loaded data. 
+ + ```python + from datetime import date + + from pydantic import BaseModel, ConfigDict, ValidationError + + class Event(BaseModel): + model_config = ConfigDict(strict=True) + + when: date + where: tuple[int, int] + + json_data = '{"when": "1987-01-28", "where": [51, -1]}' + print(Event.model_validate_json(json_data)) + + try: + Event.model_validate({'when': '1987-01-28', 'where': [51, -1]}) + + except ValidationError as e: + print(e) + """ + 2 validation errors for Event + when + Input should be a valid date [type=date_type, input_value='1987-01-28', input_type=str] + where + Input should be a valid tuple [type=tuple_type, input_value=[51, -1], input_type=list] + """ + ``` + +* New: Create part of the attributes in the initialization stage. + + ```python + class Sqlite(BaseModel): + model_config = ConfigDict(arbitrary_types_allowed=True) + + path: Path + db: sqlite3.Cursor + + def __init__(self, **kwargs): + conn = sqlite3.connect(kwargs['path']) + kwargs['db'] = conn.cursor() + super().__init__(**kwargs) + ``` + +### [psycopg2](psycopg2.md) + +* New: Introduce psycopg2. + + **Installation** + + Install the dependencies: + + ```bash + sudo apt install libpq-dev python3-dev + ``` + + Then install the package + + ```bash + pip install psycopg2 + ``` + +### [questionary](questionary.md) + +* New: [Unit testing questionary code.](questionary.md#unit-testing) + + Testing `questionary` code can be challenging because it involves interactive prompts that expect user input. However, there are ways to automate the testing process. You can use libraries like `pexpect`, `pytest`, and `pytest-mock` to simulate user input and test the behavior of your code. + + Here’s how you can approach testing `questionary` code using `pytest-mock` to mock `questionary` functions + + You can mock `questionary` functions like `questionary.select().ask()` to simulate user choices without actual user interaction. 
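Before the examples, it may help to see the mocking mechanics in isolation: the `return_value.ask.return_value` chain can be reproduced with the standard library's `unittest.mock` alone, with no `questionary` installed (`mock_prompt` is a hypothetical stand-in for the patched prompt factory):

```python
from unittest import mock

# Stand-in for a patched questionary prompt factory (questionary.text,
# questionary.select, ...): calling it returns an object whose .ask()
# yields the simulated user input.
mock_prompt = mock.MagicMock()
mock_prompt.return_value.ask.return_value = "Alice"

# Same call shape as: questionary.text("What's your name?").ask()
answer = mock_prompt("What's your name?").ask()
print(answer)  # → Alice
```

This is why the tests that patch `questionary.text` set `return_value.ask.return_value`: the patched name is called first, and `.ask()` is then called on its result.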
+ + **Testing a single `questionary.text` prompt** + + Let's assume you have a function that asks the user for their name: + + ```python + import questionary + + def ask_name() -> str: + name = questionary.text("What's your name?").ask() + return name + ``` + + You can test this function by mocking the `questionary.text` prompt to simulate the user's input. + + ```python + import pytest + from your_module import ask_name + + def test_ask_name(mocker): + # Mock the text function to simulate user input + mock_text = mocker.patch('questionary.text') + + # Define the response for the prompt + mock_text.return_value.ask.return_value = "Alice" + + result = ask_name() + + assert result == "Alice" + ``` + + **Test a function that has many questions** + + Here’s an example of how to test a function that contains two `questionary.text` prompts using `pytest-mock`. + + Let's assume you have a function that asks for the first and last names of a user: + + ```python + import questionary + + def ask_full_name() -> dict: + first_name = questionary.text("What's your first name?").ask() + last_name = questionary.text("What's your last name?").ask() + return {"first_name": first_name, "last_name": last_name} + ``` + + You can mock both `questionary.text` calls to simulate user input for both the first and last names: + + ```python + import pytest + from your_module import ask_full_name + + def test_ask_full_name(mocker): + # Mock the text function used by both prompts + mock_text = mocker.patch('questionary.text') + # Define the responses for both prompts, in order. Note that the + # side_effect goes on ask(), not on the patched factory itself. + mock_text.return_value.ask.side_effect = ["Alice", "Smith"] + + result = ask_full_name() + + assert result == {"first_name": "Alice", "last_name": "Smith"} + ``` + + +### [rich](rich.md) + +* New: Adding a footer to a table. + + Adding a footer is not an easy task. [This answer](https://github.com/Textualize/rich/discussions/2135) doesn't work anymore as `table` doesn't have an `add_footer` method.
You need to create the footer in the `add_column` so you need to have the data that needs to go to the footer before building the rows. + + You would do something like: + + ```python + table = Table(title="Star Wars Movies", show_footer=True) + table.add_column("Title", style="magenta", footer='2342') + ``` + +## Coding tools + +### [Singer](vim_foldings.md) + +* New: Introduce neotree. + + General keymaps: + + - ``: Open the file in the current buffer + - ``: Open in a vertical split + - ``: Open in an horizontal split + - ``: Navigate one directory up (even if it's the root of the `cwd`) + + File and directory management: + + - `a`: Create a new file or directory. Add a `/` to the end of the name to make a directory. + - `d`: Delete the selected file or directory + - `r`: Rename the selected file or directory + - `y`: Mark file to be copied (supports visual selection) + - `x`: Mark file to be cut (supports visual selection) + - `m`: Move the selected file or directory + - `c`: Copy the selected file or directory + + References: + + - [Docs](https://github.com/nvim-neo-tree/neo-tree.nvim/blob/main/doc/neo-tree.txt) + - [Wiki](https://github.com/nvim-neo-tree/neo-tree.nvim/wiki) + - [Wiki Recipes](https://github.com/nvim-neo-tree/neo-tree.nvim/wiki/Recipes) + +* New: [Show hidden files.](neotree.md#show-hidden-files) + + ```lua + return { + "nvim-neo-tree/neo-tree.nvim", + opts = { + filesystem = { + filtered_items = { + visible = true, + show_hidden_count = true, + hide_dotfiles = false, + hide_gitignored = true, + hide_by_name = { + '.git', + }, + never_show = {}, + }, + } + } + } + ``` + +* New: [Autoclose on open file.](neotree.md#autoclose-on-open-file) + + This example uses the file_open event to close the Neo-tree window when a file is opened. This applies to all windows and all sources at once. 
+ + ```lua + require("neo-tree").setup({ + event_handlers = { + + { + event = "file_opened", + handler = function(file_path) + -- auto close + -- vim.cmd("Neotree close") + -- OR + require("neo-tree.command").execute({ action = "close" }) + end + }, + + } + }) + ``` + +* New: [Configuring vim folds.](neotree.md#configuring-vim-folds) + + Copy the code under [implementation](https://github.com/nvim-neo-tree/neo-tree.nvim/wiki/Recipes#emulating-vims-fold-commands) in your config file. + +* New: [Can't copy file/directory to itself.](neotree.md#can't-copy-file/directory-to-itself) + + If you want to copy a directory you need to assume that the prompt is done from within the directory. So if you want to copy it to a new name at the same level you need to use `../new-name` instead of `new-name`. + +* New: Introduce the vim foldings workflow. + + One way to easily work with folds is by using the [fold-cycle](https://github.com/jghauser/fold-cycle.nvim?tab=readme-ov-file) plugin to be able to press `` or `` to toggle a fold. + + If you're using [lazyvim](lazyvim.md) you can use the next configuration: + + ```lua + return { + { + "jghauser/fold-cycle.nvim", + config = function() + require("fold-cycle").setup() + end, + keys = { + { + "", + function() + return require("fold-cycle").open() + end, + desc = "Fold-cycle: open folds", + silent = true, + }, + { + "", + function() + return require("fold-cycle").open() + end, + desc = "Fold-cycle: open folds", + silent = true, + }, + { + "", + function() + return require("fold-cycle").close() + end, + desc = "Fold-cycle: close folds", + silent = true, + }, + { + "zC", + function() + return require("fold-cycle").close_all() + end, + remap = true, + silent = true, + desc = "Fold-cycle: close all folds", + }, + }, + }, + } + ```
+ + It describes how data extraction scripts — called “taps” — and data loading scripts — called “targets” — should communicate, allowing them to be used in any combination to move data from any source to any destination. Send data between databases, web APIs, files, queues, and just about anything else you can think of. + + It has many "taps" and "targets" that can help you interact with third party tools without needing to write the code. + + **References** + - [Home](https://www.singer.io/) + +* New: [ is not well mapped.](vim.md#-is-not-well-mapped) + + It's because `` is a synonym of ``. + +### [Coding with AI](vim_movement.md) + +* New: Introduce LazyVim. + + - [Source](https://github.com/LazyVim/LazyVim) + - [Docs](https://lazyvim.github.io/) + - [Home](https://lazyvim.github.io/) + +* New: [Adding plugins configuration.](lazyvim.md#adding-plugins-configuration) + + Configuring LazyVim plugins is exactly the same as using `lazy.nvim` to build a config from scratch. + + For the full plugin spec documentation please check the [lazy.nvim readme](https://github.com/folke/lazy.nvim). + + LazyVim comes with a list of preconfigured plugins, check them [here](https://www.lazyvim.org/configuration/plugins) before diving on your own. + +* New: [Adding a plugin.](lazyvim.md#adding-a-plugin) + + Adding a plugin is as simple as adding the plugin spec to one of the files under `lua/plugins/*.lua`. You can create as many files there as you want. + + You can structure your `lua/plugins` folder with a file per plugin, or a separate file containing all the plugin specs for some functionality. For example: `lua/plugins/lsp.lua` + + ```lua + return { + -- add symbols-outline + { + "simrat39/symbols-outline.nvim", + cmd = "SymbolsOutline", + keys = { { "cs", "SymbolsOutline", desc = "Symbols Outline" } }, + opts = { + -- add your options that should be passed to the setup() function here + position = "right", + }, + }, + } + ``` + + Customizing plugin specs.
Defaults merging rules: + + - cmd: the list of commands will be extended with your custom commands + - event: the list of events will be extended with your custom events + - ft: the list of filetypes will be extended with your custom filetypes + - keys: the list of keymaps will be extended with your custom keymaps + - opts: your custom opts will be merged with the default opts + - dependencies: the list of dependencies will be extended with your custom dependencies + - any other property will override the defaults + + For ft, event, keys, cmd and opts you can instead also specify a values function that can make changes to the default values, or return new values to be used instead. + + ```lua + -- change trouble config + { + "folke/trouble.nvim", + -- opts will be merged with the parent spec + opts = { use_diagnostic_signs = true }, + } + + -- add cmp-emoji + { + "hrsh7th/nvim-cmp", + dependencies = { "hrsh7th/cmp-emoji" }, + ---@param opts cmp.ConfigSchema + opts = function(_, opts) + table.insert(opts.sources, { name = "emoji" }) + end, + } + ``` + + Defining the plugin keymaps: + + Adding `keys=` follows the rules as explained above. You don't have to specify a mode for `normal` mode keymaps. + + You can also disable a default keymap by setting it to `false`. To override a keymap, simply add one with the same `lhs` and a new `rhs`. For example `lua/plugins/telescope.lua` + + ```lua + return { + "nvim-telescope/telescope.nvim", + keys = { + -- disable the keymap to grep files + {"/", false}, + -- change a keymap + { "ff", "Telescope find_files", desc = "Find Files" }, + -- add a keymap to browse plugin files + { + "fp", + function() require("telescope.builtin").find_files({ cwd = require("lazy.core.config").options.root }) end, + desc = "Find Plugin File", + }, + }, + }, + ``` + + Make sure to use the exact same mode as the keymap you want to disable. 
+ + ```lua + return { + "folke/flash.nvim", + keys = { + -- disable the default flash keymap + { "s", mode = { "n", "x", "o" }, false }, + }, + } + ``` + You can also return a whole new set of keymaps to be used instead. Or return `{}` to disable all keymaps for a plugin. + + ```lua + return { + "nvim-telescope/telescope.nvim", + -- replace all Telescope keymaps with only one mapping + keys = function() + return { + { "ff", "Telescope find_files", desc = "Find Files" }, + } + end, + }, + ``` + +* New: [Auto update plugins.](lazyvim.md#auto-update-plugins) + + Add this to `~/.config/nvim/lua/config/autocomds.lua` + + ```lua + local function augroup(name) + return vim.api.nvim_create_augroup("lazyvim_" .. name, { clear = true }) + end + + vim.api.nvim_create_autocmd("VimEnter", { + group = augroup("autoupdate"), + callback = function() + if require("lazy.status").has_updates then + require("lazy").update({ show = false }) + end + end, + }) + ``` + +* New: Introduce vim keymaps. + + LazyVim comes with some sane default keybindings, you can see them [here](https://github.com/LazyVim/LazyVim/blob/main/lua/lazyvim/config/keymaps.lua). You don't need to remember them all, it also comes with [which-key](https://github.com/folke/which-key.nvim) to help you remember your keymaps. Just press any key like and you'll see a popup with all possible keymaps starting with . + + - default `` is `` + - default `` is `\` + + General editor bindings: + + - Save file: `` + - Quit all: `qq` + - Open a floating terminal: `` + + Movement keybindings: + + - Split the windows: + - Vertically: `wd` + - To move around the windows you can use: , , , . 
+ - To resize the windows use: , , , + - To move between buffers: + - Next and previous with , + - Switch to the previously opened buffer: `bb` + + Coding keybindings: + + Diagnostics: + + - `cd>`: Shows you the diagnostics message of the current line in a floating window + - `]d` and `[d`: iterates over all diagnostics + - `]e` and `[e`: iterates over all error diagnostics + - `]w` and `[w`: iterates over all warning diagnostics + +* New: [Setting keymaps in lua.](vim_keymaps.md#setting-keymaps-in-lua) + + If you need to set keymaps in lua you can use `vim.keymap.set`. For example: + + ```lua + vim.keymap.set('n', 'w', 'write', {desc = 'Save'}) + ``` + + After executing this, the sequence `Space + w` will call the `write` command. Basically, we can save changes made to a file with `Space + w`. + + Let's dive into what the `vim.keymap.set` parameters mean. + + ```lua + vim.keymap.set({mode}, {lhs}, {rhs}, {opts}) + ``` + + * `{mode}`: mode where the keybinding should execute. It can be a list of modes. We need to specify the mode's short name. Here are some of the most common. + * `n`: Normal mode. + * `i`: Insert mode. + * `x`: Visual mode. + * `s`: Selection mode. + * `v`: Visual + Selection. + * `t`: Terminal mode. + * `o`: Operator-pending. + * `''`: Yes, an empty string. It's the equivalent of `n + v + o`. + + * `{lhs}`: is the key we want to bind. + * `{rhs}`: is the action we want to execute. It can be a string with a command or an expression. You can also provide a lua function. + * `{opts}`: this must be a lua table. If you don't know what a "lua table" is, just think of it as a way of storing several values in one place. Anyway, it can have these properties. + + * `desc`: A string that describes what the keybinding does. You can write anything you want. + * `remap`: A boolean that determines if our keybinding can be recursive. The default value is `false`. Recursive keybindings can cause some conflicts if used incorrectly.
Don't enable it unless you know what you're doing. + * `buffer`: It can be a boolean or a number. If we assign the boolean `true` it means the keybinding will only be effective in the current file. If we assign a number, it needs to be the "id" of an open buffer. + * `silent`: A boolean. Determines whether or not the keybindings can show a message. The default value is `false`. + * `expr`: A boolean. If enabled it gives the chance to use vimscript or lua to calculate the value of `{rhs}`. The default value is `false`. + +* New: [The leader key.](vim_keymaps.md#the-leader-key) + + When creating keybindings we can use the special sequence `` in the `{lhs}` parameter; it'll take the value of the global variable `mapleader`. + + So `mapleader` is a global variable in vimscript that can be a string. For example: + + ```lua + vim.g.mapleader = ' ' + ``` + + After defining it we can use it as a prefix in our keybindings. + + ```lua + vim.keymap.set('n', 'w', 'write') + ``` + + This will make `` + `w` save the current file. + + There are different opinions on what key to use as the `` key. The `` is the most comfortable as it's always close to your thumbs, and it works well with both hands. Nevertheless, you can only use it in normal mode, because in insert `` will be triggered as you write. An alternative is to use `;` which is also comfortable (if you use the English key layout) and you can use it in insert mode. + + If you [want to define more than one leader key](https://stackoverflow.com/questions/30467660/can-we-define-more-than-one-leader-key-in-vimrc) you can either: + + * Change the `mapleader` many times in your file: As the value of `mapleader` is used at the moment the mapping is defined, you can indeed change that while plugins are loading. For that, you have to explicitly `:runtime` the plugins in your `~/.vimrc` (and count on the canonical include guard to prevent redefinition later): + + ```vim + let mapleader = ',' + runtime!
plugin/NERD_commenter.vim + runtime! ... + let mapleader = '\' + runtime! plugin/mark.vim + ... + ``` + * Use the keys directly instead of using `` + + ```vim + " editing mappings + nnoremap ,a + nnoremap ,k + nnoremap ,d + + " window management mappings + nnoremap gw + nnoremap gb + ``` + + Defining `mapleader` and/or using `` may be useful if you change your mind often on what key to use as a leader but it won't be of any use if your mappings are stable. + +* New: Configure vim from scratch. + + Neovim configuration is a **complex** thing to do, both to start and to maintain. The configurations are endless, the plugins are too. Be ready to spend a lot of energy on it and to get lost reading a lot. + + If I'm scaring you, you are right to be scared! xD. Once you manage to get it configured to your liking you'll think that in the end it doesn't even matter spending all that time. However, if you're searching for something that is plug and play try [vscodium](vscodium.md). + + To make things worse, the configuration [is done in lua](#configuration-done-in-Lua), so you may need a [small refreshment](lua.md) to understand what you are doing. + +* New: [Vim distributions.](vim_config.md#vim-distributions) + + One way to make vim's configuration more bearable is to use vim distributions. These are projects that maintain configurations with sane defaults and that work with the whole ecosystem of plugins. + + Using them is the best way to: + + - Have something usable fast + - Minimize the maintenance efforts as others are doing it for you (plugin changes, breaking changes, ...) + - Keep updated with the neovim ecosystem, as you can see what the community is adding to the default config. + + However, there are so many good Neovim configuration distributions that it becomes difficult for a Neovim user to decide which distribution to use and how to tailor it for their use case.
By far, the top 5 Neovim configuration distributions are [AstroNvim](https://github.com/AstroNvim/AstroNvim), [kickstart](https://github.com/nvim-lua/kickstart.nvim), [LazyVim](https://github.com/LazyVim/LazyVim), [LunarVim](https://github.com/LunarVim/LunarVim), and [NvChad](https://github.com/NvChad/NvChad). That is not to say these are the “best” configuration distributions, simply that they are the most popular. + + Each of these configuration distributions has value. They all provide excellent starting points for crafting your own custom configuration, they are all extensible and fairly easy to learn, and they all provide an out-of-the-box setup that can be used effectively without modification. + + Distinguishing features of the top Neovim configuration distributions are: + + - AstroNvim: + + - An excellent community repository + - Fully featured out-of-the-box + - Good documentation + + - kickstart + + - Minimal out-of-the-box setup + - Easy to extend and widely used as a starting point + - A good choice if your goal is hand-crafting your own config + + - LazyVim + + - Very well maintained by the author of lazy.nvim + - Nice architecture, it’s a plugin with which you can import preconfigured plugins + - Good documentation + + - LunarVim + + - Well maintained and mature + - Custom installation process installs LunarVim in an isolated location + - Been around a while, large community, widespread presence on the web + + - NvChad + + - Really great base46 plugin enables easy theme/colorscheme management + - Includes an impressive mappings cheatsheet + - ui plugin and nvim-colorizer + + Personally I tried LunarVim and finally ended up with LazyVim because: + + - It's more popular + - I like the file structure + - It's being maintained by [folke](https://github.com/folke), one of the best developers of neovim plugins.
+ +* New: [Starting your configuration with LazyVim.](vim_config.md#starting-your-configuration-with-lazyvim) + + [Installing the requirements](https://www.lazyvim.org/): + + LazyVim needs the next tools to be able to work: + + - Neovim >= 0.9.0 (needs to be built with LuaJIT). Follow [these instructions](vim.md#installation) + - Git >= 2.19.0 (for partial clones support). `sudo apt-get install git`. + - a [Nerd Font (v3.0 or greater)](https://www.nerdfonts.com/) (optional, but strongly suggested as they are needed to display some icons). Follow [these instructions if you're using kitty](kitty.md#fonts). + - lazygit (optional and I didn't like it) + - a C compiler for nvim-treesitter. `apt-get install gcc` + - for telescope.nvim (optional) + - live grep: `ripgrep` + - find files: `fd` + - a terminal that supports true color and undercurl: + - [kitty (Linux & Macos)](kitty.md) + - wezterm (Linux, Macos & Windows) + - alacritty (Linux, Macos & Windows) + - iterm2 (Macos) + + [Install the starter](https://www.lazyvim.org/installation): + + - Make a backup of your current Neovim files: + ```bash + # required + mv ~/.config/nvim{,.old} + + # optional but recommended + mv ~/.local/share/nvim{,.old} + mv ~/.local/state/nvim{,.old} + mv ~/.cache/nvim{,.old} + ``` + - Clone the starter + + ```bash + git clone https://github.com/LazyVim/starter ~/.config/nvim + ``` + + - Remove the `.git` folder, so you can add it to your own repo later + + ```bash + rm -rf ~/.config/nvim/.git + ``` + + - Start Neovim! + + ```bash + nvim + ``` + - It is recommended to run `:LazyHealth` after installation. This will load all plugins and check if everything is working correctly. + + [Understanding the file structure](https://www.lazyvim.org/configuration): + + The files under `config` will be automatically loaded at the appropriate time, so you don't need to require those files manually. + + You can add your custom plugin specs under `lua/plugins/`.
All files there will be automatically loaded by lazy.nvim. + + ``` + ~/.config/nvim + ├── lua + │ ├── config + │ │ ├── autocmds.lua + │ │ ├── keymaps.lua + │ │ ├── lazy.lua + │ │ └── options.lua + │ └── plugins + │ ├── spec1.lua + │ ├── ** + │ └── spec2.lua + └── init.lua + ``` + The files `autocmds.lua`, `keymaps.lua`, `lazy.lua` and `options.lua` under `lua/config` will be automatically loaded at the appropriate time, so you don't need to require those files manually. LazyVim comes with a set of default config files that will be loaded before your own. + + You can continue your config by [adding plugins](lazyvim.md). + +* New: Introduce the vim movement workflow. + + Moving around vim can be done in many ways, which can lead to being lost on how to do it well. + + LazyVim has [a very nice way to deal with buffers](https://www.lazyvim.org/configuration/tips#navigating-around-multiple-buffers): + - Use `H` and `L` if the buffer you want to go to is visually close to where you are. + - Otherwise, if the buffer is open, use `,` + - For other files, use `` + - Close buffers you no longer need with `bd` + - `ss` to quickly jump to a function in the buffer you're on + - Using the [jump list](#Using-the-jump-list) with ``, `` and `gd` to navigate the code + - You can pin buffers with `bp` and delete all non pinned buffers with `bP` + +* New: [Using the jump list.](vim_movement.md#using-the-jump-list) + + Vim has a feature called the “Jump List”, which saves all the locations you’ve recently visited, including their line number and column number, in the `.viminfo` file, to help you get exactly the position you were last in. Not only does it save the locations in your current buffer, but also previous buffers you may have edited in other Vim sessions. Which means, if you’re currently working on a file, and there aren’t many last-location saves in this one, you’ll be redirected to the previous file you had edited. But how do you do that?
Simply press `Ctrl + O`, and it’ll get you back to the previous location you were in, or more specifically, where your cursor was. + + If you want to go back to the newer positions, after you’re done with what you wanted to do, you can then press `Ctrl + i` to go back to the newer position. This is exceptionally useful when you’re working with a lot of project files at a time, and you need to go back and forth between multiple blocks in different files. This could instantly give you a boost, as you won’t need to have separate buffers opened up or windows set up; you can simply jump between the files and edit them. + + `Ctrl + O` is probably not meant for a single task, as far as Vim’s philosophy is concerned. The jumping mentioned in the previous section only works when you’re in Normal Mode, and not when you’re in Insert Mode. When you press `Ctrl + O` in Insert Mode, what happens instead is that you’ll enter Normal Mode and be able to execute a single command, after which Vim will automatically switch back to Insert Mode. For example, pressing `Ctrl + O` and then `:w` in Insert Mode saves the file and drops you right back into Insert Mode.
+ +* New: [Install using Lazyvim.](diffview.md#using-lazyvim) + + ```lua + return { + { + "sindrets/diffview.nvim", + dependencies = { + { "nvim-tree/nvim-web-devicons", lazy = true }, + }, + + keys = { + { + "dv", + function() + if next(require("diffview.lib").views) == nil then + vim.cmd("DiffviewOpen") + else + vim.cmd("DiffviewClose") + end + end, + desc = "Toggle Diffview window", + }, + }, + }, + } + ``` + + Which sets the next bindings: + - `dv`: [Toggle the opening and closing of the diffview windows](https://www.reddit.com/r/neovim/comments/15remc4/how_to_exit_all_the_tabs_in_diffviewnvim/?rdt=52076) + +* New: [Use diffview as merge tool.](diffview.md#use-diffview-as-merge-tool) + + Add to your `~/.gitconfig`: + + ```ini + [alias] + mergetool = "!nvim -c DiffviewOpen" + ``` + +* New: [Resolve merge conflicts.](diffview.md#resolve-merge-conflicts) + + If you call `:DiffviewOpen` during a merge or a rebase, the view will list the conflicted files in their own section. When opening a conflicted file, it will open in a 3-way diff allowing you to resolve the conflict with the context of the target branch's version (OURS, left), and the version from the branch which is being merged (THEIRS, right). + + The conflicted file's entry in the file panel will show the remaining number of conflict markers (the number following the file name). If what follows the file name is instead an exclamation mark (`!`), this indicates that the file has not yet been opened, and the number of conflicts is unknown. If the sign is a check mark, this indicates that there are no more conflicts in the file. + + You can interact with the merge tool with the next bindings: + + - `]x` and `[x`: Jump between conflict markers. This works from the file panel as well. 
+ - `dp`: Put the contents into the other buffer + - `do`: Get the contents from the other buffer + - `2do`: Obtain the hunk from the OURS side of the diff + - `3do`: Obtain the hunk from the THEIRS side of the diff + - `1do`: Obtain the hunk from the BASE in a 4-way diff + + Additionally there are mappings for operating directly on the conflict + markers: + + - `co`: Choose the OURS version of the conflict. + - `ct`: Choose the THEIRS version of the conflict. + - `cb`: Choose the BASE version of the conflict. + - `ca`: Choose all versions of the conflict (effectively + just deletes the markers, leaving all the content). + - `dx`: Choose none of the versions of the conflict (delete the + conflict region). + +* New: Introduce AI coding prompts. + + These are some useful AI prompts to help you while you code: + + - create a function with type hints and docstring using google style called { } that { } + - create the tests for the function { } adding type hints and following the AAA style, where the Act section contains a returns = (thing to test) line, or, if the function to test doesn't return any value, appends an # act comment at the end of the line. Use paragraphs to separate the AAA blocks and don't add comments inside the tests for the sections + + If you use [espanso](espanso.md) you can simplify filling in these prompts in the AI chats. 
For example: + + ```yaml + --- + matches: + - trigger: :function + form: | + Create a function with type hints and docstring using google style called [[name]] that: + [[text]] + form_fields: + text: + multiline: true + - trigger: :tweak + form: | + Tweak the next code: + [[code]] + + So that: + + [[text]] + form_fields: + text: + multiline: true + code: + multiline: true + - trigger: :test + form: | + create the tests for the function: + [[text]] + + Following the next guidelines: + + - Add type hints + - Follow the AAA style + - In the Act section if the function to test returns a value always name that variable returns. If the function to test doesn't return any value append an # act comment at the end of the line. + - Use paragraphs to separate the AAA blocks and don't add comments like # Arrange or # Act or # Act/Assert or # Assert + + form_fields: + text: + multiline: true + - trigger: :refactor + form: | + Refactor the next code + [[code]] + with the next conditions + [[conditions]] + form_fields: + code: + multiline: true + conditions: + multiline: true + ``` + +* New: Introduce Kestra. + + [Kestra](https://kestra.io/) is an [open-source orchestrator](data_orchestrator.md) designed to bring Infrastructure as Code (IaC) best practices to all workflows — from those orchestrating mission-critical operations, business processes, and data pipelines to simple Zapier-style automation. Built with an API-first philosophy, Kestra enables users to define and manage data pipelines through a simple YAML configuration file. This approach frees you from being tied to a specific client implementation, allowing for greater flexibility and easier integration with various tools and services. 
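Because flows are declared in YAML, a minimal pipeline can be sketched in a few lines (a hedged sketch based on Kestra's getting-started docs; the flow id, namespace, and task id are arbitrary, and the Log task's type string may differ between Kestra versions):

```yaml
# A hypothetical minimal Kestra flow: a single task that logs a message
id: hello_world
namespace: company.team

tasks:
  - id: say_hello
    type: io.kestra.plugin.core.log.Log
    message: Hello from a YAML-defined pipeline
```

Saving such a file through the API or the web UI is enough to get a runnable flow; no client library in a specific language is required.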
+ + Look at this [4 minute video](https://www.youtube.com/watch?v=h-P0eK2xN58) for a visual introduction + + **References** + - [Docs](https://kestra.io/docs/getting-started) + - [Home](https://kestra.io/) + - [4 minute introduction video](https://www.youtube.com/watch?v=h-P0eK2xN58) + +* New: Add new prompts for developers. + + ```yaml + - trigger: :polish + form: | + Polish the next code + [[code]] + with the next conditions: + - Use type hints on all functions and methods + - Add or update the docstring using google style on all functions and methods + form_fields: + code: + multiline: true + - trigger: :commit + form: | + Act as an expert developer. Create a message commit with the next conditions: + - follow semantic versioning + - create a semantic version comment per change + - include all comments in a raw code block so that it's easy to copy + + for the following diff + [[text]] + form_fields: + text: + multiline: true + ``` + +* Correction: Update the ai prompts. + + ```yaml + matches: + - trigger: :function + form: | + Create a function with: + - type hints + - docstrings for all classes, functions and methods + - docstring using google style with line length less than 89 characters + - adding logging traces using the log variable log = logging.getLogger(__name__) + - Use fstrings instead of %s + - If you need to open or write a file always set the encoding to utf8 + - If possible add an example in the docstring + - Just give the code, don't explain anything + + Called [[name]] that: + [[text]] + form_fields: + text: + multiline: true + - trigger: :class + form: | + Create a class with: + - type hints + - docstring using google style with line length less than 89 characters + - use docstrings on the class and each methods + - adding logging traces using the log variable log = logging.getLogger(__name__) + - Use fstrings instead of %s + - If you need to open or write a file always set the encoding to utf8 + - If possible add an example in the docstring + 
- Just give the code, don't explain anything + + Called [[name]] that: + [[text]] + form_fields: + text: + - trigger: :test + form: | + ... + - Use paragraphs to separate the AAA blocks and don't add comments like # Arrange or # Act or # Act/Assert or # Assert. So the test will only have blank lines between sections + - In the Act section if the function to test returns a value always name that variable result. If the function to test doesn't return any value append an # act comment at the end of the line. + - If the test uses a pytest.raises there is no need to add the # act comment + - Don't use mocks + - Use fstrings instead of %s + - Gather all tests over the same function on a common class + - If you need to open or write a file always set the encoding to utf8 + - Just give the code, don't explain anything + + form_fields: + text: + - trigger: :polish + form: | + ... + - Add or update the docstring using google style on all classes, functions and methods + - Wrap the docstring lines so they are smaller than 89 characters + - All docstrings must start in the same line as the """ + - Add logging traces using the log variable log = logging.getLogger(__name__) + - Use f-strings instead of %s + - Just give the code, don't explain anything + form_fields: + code: + multiline: true + - trigger: :text + form: | + Polish the next text by: + + - Summarising each section without losing relevant data + - Tweaking the markdown format + - Improving the wording + + [[text]] + form_fields: + text: + multiline: true + + - trigger: :readme + form: | + Create the README.md taking into account: + + - Use GPLv3 for the license + - Add Lyz as the author + - Add an installation section + - Add a usage section + + of: + [[text]] + + form_fields: + text: + multiline: true + ``` + +* New: [Get all documents of a collection.](aleph.md#get-all-documents-of-a-collection) + + `list_aleph_collection_documents.py` is a Python script designed to interact with an API to + retrieve and analyze documents 
from specified collections. It offers a command-line interface + (CLI) to list and check documents within a specified collection. + + **Features** + + - Retrieve documents from a specified collection. + - Analyze document processing statuses and warn if any are not marked as successful. + - Return a list of filenames from the retrieved documents. + - Supports verbose output for detailed logging. + - Environment variable support for API key management. + + **Installation** + + To install the required dependencies, use `pip`: + + ```bash + pip install typer requests + ``` + + Ensure you have Python 3.6 or higher installed. + + Create the file `list_aleph_collection_documents.py` with the next contents: + + ```python + import logging + from typing import Any, Dict, List, Optional + + import requests + import typer + + log = logging.getLogger(__name__) + app = typer.Typer() + + @app.command() + def get_documents( + collection_name: str = typer.Argument(...), + api_key: Optional[str] = typer.Option(None, envvar="API_KEY"), + base_url: str = typer.Option("https://your.aleph.org"), + verbose: bool = typer.Option( + False, "--verbose", "-v", help="Enable verbose output" + ), + ): + """CLI command to retrieve documents from a specified collection.""" + if verbose: + logging.basicConfig(level=logging.DEBUG) + log.debug("Verbose mode enabled.") + else: + logging.basicConfig(level=logging.INFO) + if api_key is None: + log.error( + "Please specify your api key either through the --api-key argument " + "or through the API_KEY environment variable" + ) + raise typer.Exit(code=1) + try: + documents = list_collection_documents(api_key, base_url, collection_name) + filenames = check_documents(documents) + if filenames: + print("\n".join(filenames)) + else: + log.warning("No documents found.") + except Exception as e: + log.error(f"Failed to retrieve documents: {e}") + raise typer.Exit(code=1) + + def list_collection_documents( + 
api_key: str, base_url: str, collection_name: str + ) -> List[Dict[str, Any]]: + """ + Retrieve documents from a specified collection using pagination. + + Args: + api_key (str): The API key for authentication. + base_url (str): The base URL of the API. + collection_name (str): The name of the collection to retrieve documents from. + + Returns: + List[Dict[str, Any]]: A list of documents from the specified collection. + + Example: + >>> docs = list_collection_documents("your_api_key", "https://api.example.com", "my_collection") + >>> print(len(docs)) + 1000 + """ + headers = { + "Authorization": f"ApiKey {api_key}", + "Accept": "application/json", + "Content-Type": "application/json", + } + + collections_url = f"{base_url}/api/2/collections" + documents_url = f"{base_url}/api/2/entities" + log.debug(f"Requesting collections list from {collections_url}") + collections = [] + params = {"limit": 300} + + while True: + response = requests.get(collections_url, headers=headers, params=params) + response.raise_for_status() + data = response.json() + collections.extend(data["results"]) + log.debug( + f"Fetched {len(data['results'])} collections, " + f"page {data['page']} of {data['pages']}" + ) + if not data["next"]: + break + params["offset"] = params.get("offset", 0) + data["limit"] + + collection_id = next( + (c["id"] for c in collections if c["label"] == collection_name), None + ) + if not collection_id: + log.error(f"Collection {collection_name} not found.") + return [] + + log.info(f"Found collection '{collection_name}' with ID {collection_id}") + + documents = [] + params = { + "q": "", + "filter:collection_id": collection_id, + "filter:schemata": "Document", + "limit": 300, + } + + while True: + log.debug(f"Requesting documents from collection {collection_id}") + response = requests.get(documents_url, headers=headers, params=params) + response.raise_for_status() + data = response.json() + documents.extend(data["results"]) + log.info( + f"Fetched 
{len(data['results'])} documents, " + f"page {data['page']} of {data['pages']}" + ) + if not data["next"]: + break + params["offset"] = params.get("offset", 0) + data["limit"] + + log.info(f"Retrieved {len(documents)} documents from collection {collection_name}") + + return documents + + def check_documents(documents: List[Dict[str, Any]]) -> List[str]: + """Analyze the processing status of documents and return a list of filenames. + + Args: + documents (List[Dict[str, Any]]): A list of documents in JSON format. + + Returns: + List[str]: A list of filenames from documents with a successful processing status. + + Raises: + None, but logs warnings if a document's processing status is not 'success'. + + Example: + >>> docs = [{"properties": {"processingStatus": ["success"], "fileName": ["file1.txt"]}}, + ... {"properties": {"processingStatus": ["failed"], "fileName": ["file2.txt"]}}] + >>> filenames = check_documents(docs) + >>> print(filenames) + ['file1.txt'] + """ + filenames = [] + + for doc in documents: + status = doc.get("properties", {}).get("processingStatus", [None])[0] + filename = doc.get("properties", {}).get("fileName", [None])[0] + + if status != "success": + log.warning( + f"Document with filename {filename} has processing status: {status}" + ) + elif filename: + filenames.append(filename) + + log.debug(f"Collected filenames: {filenames}") + return filenames + + if __name__ == "__main__": + app() + ``` + + *Get your API key* + + By default, any Aleph search will return only public documents in responses to API requests. + + If you want to access documents which are not marked public, you will need to sign into the tool. This can be done through the use of an API key. The API key for any account can be found by clicking on the "Settings" menu item in the navigation menu. + + **Usage** + + You can run the script directly from the command line. 
Below are examples of usage: + + Retrieve and list documents from a collection: + + ```bash + python list_aleph_collection_documents.py --api-key "your-api-key" 'Name of your collection' + ``` + + Using an Environment Variable for the API Key + + This is better from a security perspective. + ```bash + export API_KEY=your_api_key + python list_aleph_collection_documents.py 'Name of your collection' + ``` + + Enabling Verbose Logging + + To enable detailed debug logs, use the `--verbose` or `-v` flag: + + ```bash + python list_aleph_collection_documents.py -v 'Name of your collection' + ``` + + Getting help + + ```bash + python list_aleph_collection_documents.py --help + ``` + +### [memorious](vim_tabs.md) + +* New: [Switch to the previous opened buffer.](vim_tabs.md#switch-to-the-previous-opened-buffer) + + Often the buffer that you want to edit is the buffer that you have just left. Vim provides a couple of convenient commands to switch back to the previous buffer. These are `<C-^>` (or `<C-6>`) and `:b#`. All of them are inconvenient so I use the next mapping: + + ```vim + nnoremap :b# + ``` + + +* New: [Troubleshoot Undefined global `vim` warning.](vim_lsp.md#undefined-global-`vim`-warning) + + Added to my lua/plugins directory: + + ```lua + { + "neovim/nvim-lspconfig", + opts = { + servers = { + lua_ls = { + settings = { + Lua = { + diagnostics = { + globals = { "vim" }, + }, + }, + }, + }, + }, + }, + }, + ``` + +* New: [Get the version of the packages installed by Packer.](vim_packer.md#get-the-version-of-the-packages-installed-by-packer) + + Go into the plugin directory `cd ~/.local/share/nvim/site/pack/packer/start/your-plugin` and check it with git. + +* New: Introduce memorious. + + [Memorious](https://github.com/alephdata/memorious) is a light-weight web scraping toolkit. It supports scrapers that collect structured or un-structured data. 
This includes the following use cases: + + - Make crawlers modular and simple tasks re-usable + - Provide utility functions to do common tasks such as data storage, HTTP session management + - Integrate crawlers with the Aleph and FollowTheMoney ecosystem + + **References** + + - [Memorious](https://github.com/alephdata/memorious) + +### [Data orchestrators](gitea.md) + +* Correction: Update disable regular login with oauth. + + The last `signin_inner.tmpl` failed with the latest version. I've + uploaded the new working one. + +* New: Configure vim to work with markdown. + + Markdown specific plugins: + + - [mkdnflow](https://github.com/jakewvincent/mkdnflow.nvim) looks awesome. + +* New: [Enable folds.](vim_markdown.md#enable-folds) + + If you have set the `foldmethod` to `indent` by default you won't be able to use folds in markdown. + + To fix this you can create the next autocommand (in `lua/config/autocmds.lua` if you're using [lazyvim](lazyvim.md)). + + ```lua + vim.api.nvim_create_autocmd("FileType", { + pattern = "markdown", + callback = function() + vim.wo.foldmethod = "expr" + vim.wo.foldexpr = "v:lua.vim.treesitter.foldexpr()" + end, + }) + ``` + +* New: [Aligning tables in markdown.](vim_markdown.md#aligning-tables-in-markdown) + + In the past I used [Tabular](https://github.com/godlygeek/tabular) but it doesn't work with the latest neovim and the project hasn't had any updates in the last 5 years. + + A good way to achieve this [without installing any plugin is to](https://heitorpb.github.io/bla/format-tables-in-vim/): + + - Select the table, including the header and footer lines (with shift V, for example). + - Prettify the table with `:!column -t -s '|' -o '|'` + + If you don't want to remember that command you can bind it to a key: + + ```lua + vim.keymap.set("v", "tf", "!column -t -s '|' -o '|'<CR>", { desc = "Format table" }) + ``` + + How the hell does this work? + + - `shift V` switches to Visual mode linewise. 
This is to select all the lines of the table. + - `:` switches to Command line mode, to type commands. + - `!` specifies a filter command. This means we will send data to a command to modify it (or to filter) and replace the original lines. In this case we are in Visual mode, we defined the input text (the selected lines) and we will use an external command to modify the data. + - `column` is the filter command we are using, from the `util-linux` package. column’s purpose is to “columnate”. The `-t` flag tells column to use the Table mode. The `-s` flag specifies the delimiters in the input data (the default is whitespace). And the `-o` flag is to specify the output delimiter to use (we need that because the default is two whitespaces). + +* New: [Fix Server does not allow request for unadvertised object error.](gitea.md#fix-server-does-not-allow-request-for-unadvertised-object-error) + + Fetching the whole history with fetch-depth: 0 worked for us: + + ```yaml + - name: Checkout the codebase + uses: actions/checkout@v3 + with: + fetch-depth: 0 + ``` + +* New: Introduce data orchestrators. + + Data orchestration is the process of moving siloed data from multiple storage locations into a centralized repository where it can then be combined, cleaned, and enriched for activation. + + Data orchestrators are web applications that make this process easy. 
The most popular right now are: + + - Apache Airflow + - [Kestra](#kestra) + - Prefect + + There are several comparison pages: + + - [Geek Culture comparison](https://medium.com/geekculture/airflow-vs-prefect-vs-kestra-which-is-best-for-building-advanced-data-pipelines-40cfbddf9697) + - [Kestra's comparison to Airflow](https://kestra.io/vs/airflow) + - [Kestra's comparison to Prefect](https://kestra.io/vs/prefect) + + When looking at the return on investment of an orchestration tool, there are several points to consider: + + - Time of installation/maintenance + - Time to write pipelines + - Time to execute (performance) + + **[Kestra](kestra.md)** + + Pros: + + - Easier to write pipelines + - Nice looking web UI + - It has a [terraform provider](https://kestra.io/docs/getting-started/terraform) + - [Prometheus and grafana integration](https://kestra.io/docs/how-to-guides/monitoring) + + Cons: + + - Built in Java, so extending it might be difficult + - [Plugins are made in Java](https://kestra.io/docs/developer-guide/plugins) + + Kestra offers a higher ROI globally compared to Airflow: + + - Installing Kestra is easier than Airflow; it doesn’t require Python dependencies, and it comes with a ready-to-use docker-compose file using a few services and without the need to understand what an executor is to run tasks in parallel. + - Creating pipelines with Kestra is simple, thanks to its syntax. You don’t need knowledge of a specific programming language because Kestra is designed to be agnostic. The declarative YAML design makes Kestra flows more readable compared to Airflow’s DAG equivalent, allowing developers to significantly reduce development time. + - In this benchmark, Kestra demonstrates better execution time than Airflow under any configuration setup. + +### [Scrapers](vim_plugin_development.md) + +* New: Introduce Debug Adapter Protocol. 
+ + [`nvim-dap`](https://github.com/mfussenegger/nvim-dap) implements a client for the [Debug Adapter Protocol](https://microsoft.github.io/debug-adapter-protocol/overview). This allows a client to control a debugger over a documented API. That allows us to control the debugger from inside neovim, being able to set breakpoints, evaluate runtime values of variables, and much more. + + `nvim-dap` is not configured for any language by default. You will need to set up a configuration for each language. For the configurations you will need adapters to run. + + I would suggest starting with two actions: setting breakpoints and “running” the debugger. The debugger allows us to stop execution and look at the current state of the program. Setting breakpoints will allow us to stop execution and see what the current state is. + + ```lua + vim.api.nvim_set_keymap('n', 'b', [[:lua require"dap".toggle_breakpoint()<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'c', [[:lua require"dap".continue()<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'n', [[:lua require"dap".step_over()<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'N', [[:lua require"dap".step_into()<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', '<F5>', [[:lua require"osv".launch({port = 8086})<CR>]], { noremap = true }) + ``` + + Go to a line where a conditional or value is set and toggle a breakpoint. Then, we’ll start the debugger. If done correctly, you’ll see an arrow next to the line of code you set a breakpoint at. + + There is no UI with dap by default. 
You have a few options for a UI, for example [nvim-dap-ui](https://github.com/rcarriga/nvim-dap-ui) + + In the `dap` repl you can [use the next operations](https://github.com/mfussenegger/nvim-dap/blob/master/doc/dap.txt): + + - `.exit`: Closes the REPL + - `.c` or `.continue`: Same as |`dap.continue`| + - `.n` or `.next`: Same as |`dap.step_over`| + - `.into`: Same as |`dap.step_into`| + - `.into_target`: Same as |`dap.step_into{askForTargets=true}`| + - `.out`: Same as |`dap.step_out`| + - `.up`: Same as |`dap.up`| + - `.down`: Same as |`dap.down`| + - `.goto`: Same as |`dap.goto_`| + - `.scopes`: Prints the variables in the current scopes + - `.threads`: Prints all threads + - `.frames`: Print the stack frames + - `.capabilities`: Print the capabilities of the debug adapter + - `.b` or `.back`: Same as |`dap.step_back`| + - `.rc` or `.reverse-continue`: Same as |`dap.reverse_continue`| + +* New: [Introduce nvim-dap-ui.](vim_dap.md#nvim-dap-ui) + + Install with packer: + + ```lua + use { "rcarriga/nvim-dap-ui", requires = {"mfussenegger/nvim-dap"} } + ``` + + It is highly recommended to use [`neodev.nvim`](https://github.com/folke/neodev.nvim) to enable type checking for `nvim-dap-ui` and get type checking, documentation and autocompletion for all API functions. + + ```lua + require("neodev").setup({ + library = { plugins = { "nvim-dap-ui" }, types = true }, + ... + }) + ``` + + `nvim-dap-ui` is built on the idea of "elements". These elements are windows which provide different features. + + Elements are grouped into layouts which can be placed on any side of the screen. There can be any number of layouts, containing whichever elements desired. + + Elements can also be displayed temporarily in a floating window. + + Each element has a set of mappings for element-specific possible actions, detailed below for each element. 
The total set of actions/mappings and their default shortcuts are: + + - edit: `e` + - expand: `<CR>` or left click + - open: `o` + - remove: `d` + - repl: `r` + - toggle: `t` + + See `:h dapui.setup()` for configuration options and defaults. + + To get started simply call the setup method on startup, optionally providing custom settings. + + ```lua + require("dapui").setup() + ``` + + You can open, close and toggle the windows with corresponding functions: + + ```lua + require("dapui").open() + require("dapui").close() + require("dapui").toggle() + ``` + +* New: [Debug neovim plugins with DAP.](vim_dap.md#one-small-step-for-vimkind) + + `one-small-step-for-vimkind` is an adapter for the Neovim lua language. It allows you to debug any lua code running in a Neovim instance. + + Install it with Packer: + + ```lua + use 'jbyuki/one-small-step-for-vimkind' + ``` + + After installing one-small-step-for-vimkind, you will also need a DAP plugin which will allow you to interact with the adapter. Check the install instructions [here](#nvim-dap). + + Then add these lines to your config: + + ```lua + local dap = require"dap" + dap.configurations.lua = { + { + type = 'nlua', + request = 'attach', + name = "Attach to running Neovim instance", + } + } + + dap.adapters.nlua = function(callback, config) + callback({ type = 'server', host = config.host or "127.0.0.1", port = config.port or 8086 }) + end + ``` 
Check how to configure it [here](vim_dap.md#one-small-step-for-vimkind) + + Once you have it all set up, and assuming you're using the next keybindings for `nvim-dap`: + + ```lua + vim.api.nvim_set_keymap('n', 'b', [[:lua require"dap".toggle_breakpoint()<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'c', [[:lua require"dap".continue()<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'n', [[:lua require"dap".step_over()<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'N', [[:lua require"dap".step_into()<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', '<F5>', [[:lua require"osv".launch({port = 8086})<CR>]], { noremap = true }) + vim.api.nvim_set_keymap('n', 'B', [[:lua require"dapui".toggle()<CR>]], { noremap = true }) + ``` + + You will debug the plugin by: + + - Launch the server in the debuggee using `F5`. + - Open another Neovim instance with the source file (the debugger). + - Place a breakpoint with `b`. + - On the debugger connect to the DAP client with `c`. + - Optionally open the `nvim-dap-ui` with `B` in the debugger. + - Run your script/plugin in the debuggee. + - Interact in the debugger using `n` to step to the next step, and `N` to step into. Then use the dap console to inspect and change the values of the state. + + +* New: Introduce morph.io. + + [morph.io](https://morph.io/) is a web service that runs your scrapers for you. + + Write your scraper in the language you know and love, push your code to GitHub, and they take care of the boring bits. Things like running your scraper regularly, alerting you if there's a problem, storing your data, and making your data available for download or through a super-simple API. + + To sign in you'll need a GitHub account. This is where your scraper code is stored. + + The data is stored in an SQLite database. + + **Usage limits** + + Right now there are very few limits. They are trusting you that you won't abuse this. 
However, they do impose a couple of hard limits on running scrapers so they don't take up too many resources: + + - max 512 MB memory + - max 24 hours run time for a single run + + If a scraper runs out of memory or runs too long it will get killed automatically. + + There's also a soft limit: + + - max 10,000 lines of log output + + If a scraper generates more than 10,000 lines of log output the scraper will continue running uninterrupted. You just won't see any more output than that. To avoid this happening simply print less stuff to the screen. + + Note that they are keeping track of the amount of CPU time (and a whole bunch of other metrics) that you and your scrapers are using. So, if they do find that you are using too much they reserve the right to kick you out. In reality they'll first ask you nicely to stop. + + **References** + + - [Docs](https://morph.io/documentation) + - [Home](https://morph.io/) + +### [Vim autosave](git.md) + +* Correction: Search for alternatives to git-sweep. + + The tool is [no longer maintained](https://github.com/arc90/git-sweep/issues/45) but there is still no good alternative. I've found some, but they are either not popular and/or not maintained: + + - [gitsweeper](https://github.com/petems/gitsweeper) + - [git-removed-branches](https://github.com/nemisj/git-removed-branches) + - [git-sweep rewrite in go](https://github.com/gottwald/git-sweep) + +* New: [Update all git submodules.](git.md#update-all-git-submodules) + + If it's the first time you check out a repo you need to use `--init` first: + + ```bash + git submodule update --init --recursive + ``` + + To update to the latest tips of remote branches use: + + ```bash + git submodule update --recursive --remote + ``` + +* New: Manually toggle the autosave function. + + Besides running auto-save at startup (if you have `enabled = true` in your config), you may as well: + + - `ASToggle`: toggle auto-save + + +### [Espanso](espanso.md) + +* New: Introduce espanso. 
+ + [Espanso](https://github.com/espanso/espanso) is a cross-platform Text Expander written in Rust. + + A text expander is a program that detects when you type a specific keyword and replaces it with something else. This is useful in many ways: + + - Save a lot of typing, expanding common sentences or fixing common typos. + - Create system-wide code snippets. + - Execute custom scripts + - Use emojis like a pro. + + **[Installation](https://espanso.org/docs/install/linux/)** + Espanso ships with a .deb package, making the installation convenient on Debian-based systems. + + Start by downloading the package by running the following command inside a terminal: + + ```bash + wget https://github.com/federico-terzi/espanso/releases/download/v2.2.1/espanso-debian-x11-amd64.deb + ``` + + You can now install the package using: + + ```bash + sudo apt install ./espanso-debian-x11-amd64.deb + ``` + + From now on, you should have the `espanso` command available in the terminal (you can verify by running `espanso --version`). + + At this point, you are ready to use `espanso` by registering it first as a Systemd service and then starting it with: + + ```bash + espanso service register + ``` + + Start espanso + + ```bash + espanso start + ``` + + Espanso ships with very few built-in matches to give you the maximum flexibility, but you can expand its capabilities in two ways: creating your own custom matches or [installing packages](#using-packages). + + **[Configuration](https://espanso.org/docs/get-started/#configuration)** + + Your configuration lives at `~/.config/espanso`. A quick way to find the path of your configuration folder is by using the following command `espanso path`. + + - The files contained in the `match` directory define what Espanso should do. In other words, this is where you should specify all the custom snippets and actions (aka Matches). The `match/base.yml` file is where you might want to start adding your matches. 
+ - The files contained in the `config` directory define how Espanso should perform its expansions. In other words, this is where you should specify all Espanso's parameters and options. The `config/default.yml` file defines the options that will be applied to all applications by default, unless an app-specific configuration is present for the current app. + + **[Using packages](https://espanso.org/docs/get-started/#understanding-packages)** + + Custom matches are great, but sometimes it can be tedious to define them for every common operation, especially when you want to share them with other people. + + Espanso offers an easy way to share and reuse matches with other people: packages. In fact, they are so important that Espanso includes a built-in package manager and a store, the [Espanso Hub](https://hub.espanso.org/). + + **[Installing a package](https://espanso.org/docs/get-started/#installing-a-package)** + + Get the id of the package from the [Espanso Hub](https://hub.espanso.org/) and then run `espanso install <>`. + + Of all the packages, I've found the next ones the most useful: + + - [typofixer-en](https://hub.espanso.org/typofixer-en) + - [typofixer-es](https://hub.espanso.org/typofixer-es) + - [misspell-en-uk](https://hub.espanso.org/misspell-en-uk) + + **Overwriting the snippets of a package** + + For example the `typofixer-en` package replaces `si` with `is`, although `si` is a valid Spanish word. To override the fix you can create your own file at `~/.config/espanso/match/typofix_overwrite.yml` with the next content: + + ```yaml + matches: + # Simple text replacement + - trigger: "si" + replace: "si" + ``` + + **[Creating a package](https://espanso.org/docs/packages/creating-a-package/)** + + **Auto-restart on config changes** + + Set `auto_restart: true` on `~/.config/espanso/config/default.yml`. 
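With `auto_restart` enabled, espanso picks up new matches as soon as you save them. For example, a dynamic match you could drop into `~/.config/espanso/match/base.yml` (syntax taken from the espanso docs; the trigger name is arbitrary):

```yaml
matches:
  # Expand :date into the current date
  - trigger: ":date"
    replace: "{{mydate}}"
    vars:
      - name: mydate
        type: date
        params:
          format: "%d/%m/%Y"
```

Typing `:date` anywhere will then be replaced with something like `25/12/2024`, without restarting espanso by hand.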
+
+ **[Changing the search bar shortcut](https://espanso.org/docs/configuration/options/#customizing-the-search-bar)**
+
+ If the default search bar shortcut conflicts with your i3 configuration, set it with:
+
+ ```yaml
+ search_shortcut: CTRL+SHIFT+e
+ ```
+
+ **[Hiding the notifications](https://espanso.org/docs/configuration/options/#hiding-the-notifications)**
+
+ You can hide the notifications by adding the following option to your `$CONFIG/config/default.yml` config:
+
+ ```yaml
+ show_notifications: false
+ ```
+
+ **Usage**
+
+ Just type and you'll see the text expanded.
+
+ You can use the search bar if you don't remember your snippets.
+
+ **References**
+
+ - [Code](https://github.com/espanso/espanso)
+ - [Docs](https://espanso.org/docs/get-started/)
+
+* New: [Desktop application to add words easily.](espanso.md#desktop-application-to-add-words-easily)
+
+ Going into the espanso config files to add words is cumbersome. To make things easier, you can use the `espansadd` Python script.
+
+ I'm going to assume that you have the following prerequisites:
+
+ - A Linux distribution with i3 window manager installed.
+ - Python 3 installed.
+ - Espanso installed and configured.
+ - `ruyaml` and `tkinter` Python libraries installed.
+ - `notify-send` installed.
+ - Basic knowledge of editing configuration files in i3.
+
+ **Installation**
+
+ Create a new Python script named `espansadd.py` with the following content:
+
+ ```python
+ import tkinter as tk
+ from tkinter import simpledialog
+ import traceback
+ import subprocess
+ import os
+ import sys
+
+ from ruyaml import YAML
+ from ruyaml.scanner import ScannerError
+
+ file_path = os.path.expanduser("~/.config/espanso/match/typofixer_overwrite.yml")
+
+ def append_to_yaml(file_path: str, trigger: str, replace: str) -> None:
+     """Appends a new entry to the YAML file.
+
+     Args:
+         file_path (str): The file to append the new entry to.
+         trigger (str): The trigger string to be added.
+         replace (str): The replacement string to be added.
+     """
+
+     # Define the new snippet
+     new_entry = {
+         "trigger": trigger,
+         "replace": replace,
+         "propagate_case": True,
+         "word": True,
+     }
+
+     # Load the existing data, notifying the user and exiting if the
+     # file is missing or malformed
+     try:
+         with open(os.path.expanduser(file_path), "r") as f:
+             try:
+                 data = YAML().load(f)
+             except ScannerError as e:
+                 send_notification(
+                     f"Error parsing yaml of configuration file {file_path}",
+                     f"{e.problem_mark}: {e.problem}",
+                     "critical",
+                 )
+                 sys.exit(1)
+     except FileNotFoundError:
+         send_notification(
+             f"Error opening the espanso file {file_path}", urgency="critical"
+         )
+         sys.exit(1)
+
+     data["matches"].append(new_entry)
+
+     # Write the updated data back to the file
+     with open(os.path.expanduser(file_path), "w+") as f:
+         yaml = YAML()
+         yaml.default_flow_style = False
+         yaml.dump(data, f)
+
+ def send_notification(title: str, message: str = "", urgency: str = "normal") -> None:
+     """Send a desktop notification using notify-send.
+
+     Args:
+         title (str): The title of the notification.
+         message (str): The message body of the notification. Defaults to an empty string.
+         urgency (str): The urgency level of the notification. Can be 'low', 'normal', or 'critical'. Defaults to 'normal'.
+     """
+     subprocess.run(["notify-send", "-u", urgency, title, message])
+
+ def main() -> None:
+     """Main function to prompt user for input and append to the YAML file."""
+     # Create the main Tkinter window (it won't be shown)
+     window = tk.Tk()
+     window.withdraw()  # Hide the main window
+
+     # Prompt the user for input
+     trigger = simpledialog.askstring("Espanso add input", "Enter trigger:")
+     replace = simpledialog.askstring("Espanso add input", "Enter replace:")
+
+     # Check if both inputs were provided
+     try:
+         if trigger and replace:
+             append_to_yaml(file_path, trigger, replace)
+             send_notification("Espanso snippet added successfully")
+         else:
+             send_notification(
+                 "Both trigger and replace are required", urgency="critical"
+             )
+     except Exception as error:
+         error_message = "".join(
+             traceback.format_exception(None, error, error.__traceback__)
+         )
+         send_notification(
+             "There was an unknown error adding the espanso entry",
+             error_message,
+             urgency="critical",
+         )
+
+ if __name__ == "__main__":
+     main()
+ ```
+
+ Ensure the script has executable permissions. Run the following command:
+
+ ```bash
+ chmod +x espansadd.py
+ ```
+
+ To make the `espansadd` script easily accessible, we can configure a key binding in i3 to run the script. Open your i3 configuration file, typically located at `~/.config/i3/config` or `~/.i3/config`, and add the following lines:
+
+ ```bash
+ bindsym $mod+Shift+e exec --no-startup-id /path/to/your/espansadd.py
+ ```
+
+ Replace `/path/to/your/espansadd.py` with the actual path to your script.
+
+ If you also want the popup windows to be in floating mode, add:
+
+ ```bash
+ for_window [title="Espanso add input"] floating enable
+ ```
+
+ After editing the configuration file, reload i3 to apply the changes.
+ You can do this by pressing `Mod` + `Shift` + `R` (where `Mod` is typically the `Super` or `Windows` key) or by running the following command:
+
+ ```bash
+ i3-msg reload
+ ```
+
+ **Usage**
+
+ Now that everything is set up, you can use the `espansadd` script by pressing `Mod` + `Shift` + `E`. This will open a dialog where you can enter the trigger and replacement text for the new Espanso snippet. After entering the information and pressing Enter, a notification will appear confirming that the snippet has been added, or showing an error message if something went wrong.
+
+## Generic Coding Practices
+
+### [Writing good documentation](documentation.md)
+
+* New: [Add diátaxis as documentation writing guideline.](documentation.md#references)
+
+ [Diátaxis: A systematic approach to technical documentation authoring](https://diataxis.fr/)
+
+### [Conventional comments](conventional_comments.md)
+
+* New: Introduce conventional comments.
+
+ [Conventional comments](https://conventionalcomments.org/) is the practice of using a specific format in review comments to express your intent and tone more clearly. It's strongly inspired by [semantic versioning](semantic_versioning.md).
+
+ Take the following comment:
+
+ ```
+ This is not worded correctly.
+ ```
+
+ By adding labels, you can make your intent clear:
+
+ ```
+ **suggestion:** This is not worded correctly.
+ ```
+
+ Or:
+
+ ```
+ **issue (non-blocking):** This is not worded correctly.
+ ```
+
+ Labels also prompt the reviewer to give more **actionable** comments.
+
+ ```
+ **suggestion:** This is not worded correctly.
+
+ Can we change this to match the wording of the marketing page?
+ ```
+
+ Labeling comments encourages collaboration and prevents **hours** of undercommunication and misunderstandings. They are also machine-parseable!
+
+ **Format**
+
+ Adhering to a consistent format sets readers' expectations and improves machine readability.
+ Here's the format we propose:
+ ```