What tech stack interests me as of September 2022?
Theo gives a good discussion on which front-end framework to pick.
SolidJS seems to address a lot of the mistakes and tech debt that React carries. Fireship and Jack Herrington give pretty good overviews. You can use JSX, so you’ll feel at home if you’ve used React before, but you’ll also see improvements in performance, reactivity, state management, and more.
For dependencies, I use pnpm to save on space and have a cleaner node_modules.
Bun can lead to faster server-side-rendered (SSR) React. From this Semaphore blog post: “Bun is intended to be a drop-in replacement for Node, Webpack, Babel, Yarn, and PostCSS — all in one neat package.”
Tailwind is what I’d use for styling. It differs from UI kits of yesteryear (e.g., Bootstrap) in that you get more flexibility and customizability. With Tailwind, you might not need a component library like Mantine; instead, you could build your own.
For consistency between React and React Native (mobile), Tamagui looks appealing.
For small projects, PocketBase looks like a convenient open-source backend: a single binary built on SQLite.
Graph databases are captivating for their ability to quickly traverse many edges (relationships). Neo4j, TigerGraph, and Dgraph are exciting options in this domain, as are SurrealDB and EdgeDB.
Time-series and analytics
For analytics, see Rockset.
Messaging and streaming
In service-oriented architectures, you need to emit events and pass messages between services. Kafka is often used here for its better performance compared to other message brokers like RabbitMQ. Redpanda is Kafka’s more performant, API-compatible sibling; if you want a managed solution, use Confluent Cloud.
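The model behind all of these is the same: a topic is an append-only log, and each consumer tracks its own offset into it. Below is a toy in-memory sketch of that model; real clients talk to a broker over the network, so this is purely conceptual.

```typescript
// Toy in-memory sketch of the Kafka model: a topic is an append-only log,
// and each consumer advances its own offset independently. Illustrative
// only -- real clients speak to a broker over the network.

class Topic<T> {
  private log: T[] = [];

  append(event: T): number {
    this.log.push(event);
    return this.log.length - 1; // offset of the appended record
  }

  // Read everything at or after `offset`; consumers poll independently,
  // so one slow consumer never blocks another.
  readFrom(offset: number): T[] {
    return this.log.slice(offset);
  }
}

class Consumer<T> {
  private offset = 0;
  constructor(private topic: Topic<T>) {}

  poll(): T[] {
    const batch = this.topic.readFrom(this.offset);
    this.offset += batch.length; // "commit" the new offset
    return batch;
  }
}

// Usage: two services consume the same events at their own pace.
const orders = new Topic<{ orderId: string }>();
const billing = new Consumer(orders);
const shipping = new Consumer(orders);

orders.append({ orderId: "o-1" });
console.log(billing.poll().length);  // 1
orders.append({ orderId: "o-2" });
console.log(billing.poll().length);  // 1 (only the new event)
console.log(shipping.poll().length); // 2 (it starts from offset 0)
```

That decoupling — producers append, consumers read at their own pace — is what makes the log such a good backbone for service-to-service events.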
Within the AWS ecosystem, there are a variety of messaging solutions you can use to build an Event-Driven Architecture.
Change Data Capture
“Change Data Capture” (CDC) is a pattern for tracking and reacting to changes in a database.
For small apps, you can reach for supabase/realtime.
Debezium is useful if you’ve committed to Kafka or are already using it.
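Whatever the delivery mechanism, CDC events boil down to insert/update/delete records that a consumer replays to keep a derived copy in sync. A minimal sketch of that consumer side, with an event shape loosely modeled on what tools like Debezium emit:

```typescript
// Toy sketch of the consumer side of Change Data Capture: a source table
// emits change events, and we replay them to keep a derived copy (here, a
// Map standing in for a cache or search index) in sync. The event shape
// is illustrative, loosely modeled on what CDC tools emit.

type Row = { id: string; name?: string };
type ChangeEvent =
  | { op: "insert" | "update"; row: Row }
  | { op: "delete"; id: string };

function applyChange(replica: Map<string, Row>, event: ChangeEvent): void {
  if (event.op === "delete") {
    replica.delete(event.id);
  } else {
    replica.set(event.row.id, event.row); // upsert
  }
}

// Usage: replay a change stream into an empty replica.
const replica = new Map<string, Row>();
const stream: ChangeEvent[] = [
  { op: "insert", row: { id: "u1", name: "Ada" } },
  { op: "update", row: { id: "u1", name: "Ada Lovelace" } },
  { op: "insert", row: { id: "u2", name: "Alan" } },
  { op: "delete", id: "u2" },
];
stream.forEach((e) => applyChange(replica, e));
console.log(replica.size); // 1
```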
Sending data down a pipe is great, but what if you want to transform or process the data whilst it’s in the pipe? Furthermore, what if you want to stitch multiple pipes together, creating increasingly complex pipelines?
ksqlDB is useful for asynchronously materializing views using SQL and querying them in an interactive fashion. It doesn’t replace something like Postgres or MongoDB as a primary storage system, nor does it have the rich query capabilities of an analytical store like Elasticsearch, Druid, or Snowflake. Its sweet spot is for event streaming applications that are gluing together multiple systems to get simple query capabilities.
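Conceptually, “materializing a view” means maintaining an aggregate incrementally as each event arrives, so reads become cheap lookups instead of scans over history. A toy sketch of that idea (the class and event names are made up for illustration):

```typescript
// Conceptual sketch of a materialized view: instead of recomputing an
// aggregate per query (think SELECT region, COUNT(*) ... GROUP BY region),
// the view is updated incrementally as each event arrives, and reads are
// O(1) lookups. Names here are illustrative.

type OrderEvent = { region: string; amount: number };

class CountByRegionView {
  private counts = new Map<string, number>();

  // Called once per event from the stream; O(1) per update.
  apply(event: OrderEvent): void {
    this.counts.set(event.region, (this.counts.get(event.region) ?? 0) + 1);
  }

  // A "pull query": a cheap key lookup, no scan of the event history.
  get(region: string): number {
    return this.counts.get(region) ?? 0;
  }
}

const view = new CountByRegionView();
[
  { region: "eu", amount: 10 },
  { region: "us", amount: 25 },
  { region: "eu", amount: 5 },
].forEach((e) => view.apply(e));
console.log(view.get("eu")); // 2
```

The hard parts these systems actually solve — durability, exactly-once updates, and distributing the state — are exactly what this sketch leaves out.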
Additionally, you can look to Materialize, which offers performance improvements over ksqlDB and Apache Flink, by avoiding RocksDB altogether and not “incurring cross-core data movement for every datum.”
Similarly, there is Decodable, which lets you build data pipelines from myriad data sources, such as Amazon Kinesis, Confluent Cloud, Kafka, Redpanda, MySQL, Postgres, Pulsar, and more.
Meroxa also sits in this space.
What I like about these new tools/platforms is that they may obviate the need for additional microservices whose role has traditionally been to aggregate data and transform, enhance, or enrich it somehow.
If you are familiar with the saga pattern, you typically develop a set of microservices that work together as a transaction workflow. You could potentially replace some of those microservices with Decodable streaming pipelines.
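For readers new to sagas: each step in the workflow has a compensating action, and if a later step fails, the completed steps are undone in reverse order. A minimal in-process sketch (step names like reserveInventory are hypothetical, and real sagas coordinate across services):

```typescript
// Toy sketch of the saga pattern: each step pairs an action with a
// compensating action; on failure, completed steps are rolled back in
// reverse order. Step names are illustrative, not a real API.

type SagaStep = {
  name: string;
  action: () => void;     // may throw
  compensate: () => void; // undoes the action
};

function runSaga(steps: SagaStep[]): { ok: boolean; compensated: string[] } {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.action();
      done.push(step);
    } catch {
      // Roll back completed steps in reverse order.
      const compensated = done.reverse().map((s) => {
        s.compensate();
        return s.name;
      });
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}

// Usage: payment fails, so the inventory reservation is compensated.
const sagaResult = runSaga([
  { name: "reserveInventory", action: () => {}, compensate: () => {} },
  {
    name: "chargePayment",
    action: () => { throw new Error("card declined"); },
    compensate: () => {},
  },
]);
// sagaResult.ok === false; "reserveInventory" was compensated
```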
Workflows and Orchestration
If you do write sagas and workflows that span across different microservices, check out Temporal.
It’s one of the only tools that lets you implement a distributed workflow in pure code, in contrast to something like AWS Step Functions. With Temporal, you write complex workflows where failure handling is abstracted away, and you get visibility into workflow status. Currently, you have to self-host Temporal clusters, but a cloud offering is coming soon.
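To make “failure is abstracted away” concrete: the workflow author writes plain code, and the runtime retries failing steps for them. A deliberately simplified, synchronous sketch of that idea (Temporal also persists progress durably and supports configurable backoff, which this toy version does not attempt):

```typescript
// Toy sketch of the idea behind Temporal activities: you write plain code,
// and the runtime transparently retries failing steps. Temporal also makes
// progress durable across process crashes -- this sketch does not.

function withRetries<T>(fn: () => T, maxAttempts: number): T {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return fn();
    } catch (err) {
      lastError = err; // a real retry policy would back off here
    }
  }
  throw lastError;
}

// Usage: a flaky "activity" that succeeds on the third attempt.
let calls = 0;
const outcome = withRetries(() => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "charged";
}, 5);
console.log(outcome, calls); // charged 3
```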
The main difference between Apache Airflow and Temporal seems to be that the latter is code-first (i.e., you write workflows in code) rather than DAG-first. That said, Airflow’s TaskFlow API now lets you define workflows as plain Python functions too.
The data from your backend has to get to your frontend somehow.
The dev experience working with GraphQL on the client-side has been pretty positive. Auto-generating types helps massively.
For small projects that want a Dynamo-powered GraphQL API for free, go to Grafbase.
Hasura gives you a GraphQL API over your Postgres DB.
gRPC is great for service-to-service communication. The best part of this dev experience is using protobufs: they’re performant, strongly-typed, and self-documenting. The Buf ecosystem has made the experience of using protobufs orders of magnitude better, thanks to their tools around linting, formatting, and breaking-change detection; remote library generation; and their schema registry. They even built a better version of gRPC itself, with a much better replacement for grpc-web.
You may not need a traditional GraphQL layer.
With tRPC, clients can share the server’s types and invoke RPCs (queries and mutations).
There’s no code generation either.
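The trick is that the client only imports the router’s *type*, so TypeScript infers inputs and outputs end-to-end with no generated artifacts. A toy sketch that mimics the shape of this idea (not tRPC’s actual API, and with the network hop omitted):

```typescript
// Toy sketch of the tRPC idea: the server defines procedures as plain
// functions, and the client derives its types from the router's TypeScript
// type -- no code generation. This mimics the shape of tRPC, not its API;
// a real client would go over HTTP instead of calling directly.

// "Server side": a router of procedures.
const appRouter = {
  greet: (input: { name: string }) => `Hello, ${input.name}!`,
  add: (input: { a: number; b: number }) => input.a + input.b,
};
type AppRouter = typeof appRouter; // the only thing the client imports

// "Client side": a caller whose inputs/outputs are inferred per procedure.
function createCaller<R extends Record<string, (input: any) => any>>(router: R) {
  return function call<K extends keyof R>(
    procedure: K,
    input: Parameters<R[K]>[0],
  ): ReturnType<R[K]> {
    return router[procedure](input);
  };
}

const client = createCaller(appRouter);
const greeting = client("greet", { name: "Ada" }); // inferred as string
const sum = client("add", { a: 2, b: 3 });         // inferred as number
console.log(greeting, sum); // Hello, Ada! 5
```

Rename a field on the server and the client’s call sites fail to compile immediately — that tight feedback loop is the whole appeal.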
For presenting, I’m intrigued by Slidev.