Regexes are designed to accept regular languages, and regular languages are closed under union, intersection and complement.

Regexes support union: if a and b are regexes for the regular languages A and B, then (a|b) is a regex for the union of A and B. However, there is no comparable built-in support for intersection or complement. My question is: why not? Most modern regex engines use NFAs, so implementing intersection and complement should only be a bit more complicated than it would be if they used DFAs (in which case it would be almost trivial).
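For what it's worth, many engines let you approximate intersection with a zero-width lookahead: anchored at the same position, `(?=a)b` succeeds only where both a and b match. A quick sketch in Python (the helper name `intersect` and the example patterns are just mine for illustration):

```python
import re

def intersect(a, b):
    """Match strings accepted by BOTH patterns a and b.

    The lookahead (?=a$) tests a against the whole string without
    consuming anything, then b must match the whole string too.
    """
    return re.compile(rf"(?={a}$){b}$")

# Example: all-lowercase strings (A) that also contain a vowel (B).
pat = intersect(r"[a-z]+", r"[a-z]*[aeiou][a-z]*")

print(bool(pat.fullmatch("cat")))    # True
print(bool(pat.fullmatch("crypt")))  # False: no vowel
print(bool(pat.fullmatch("Cat")))    # False: fails the lookahead
```

Note this is a feature of backtracking engines, not of the classical regular-expression formalism the question is about.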

I think in your last sentence you said the opposite of what you meant. It's with NFAs that things get easier (as can be seen from the fact that every DFA is an NFA, but not the other way around).

It's been too many decades since I studied automata theory, but I'm having trouble convincing myself that I can figure out how to make a finite state machine that accepts A∩B. I guess for negation you just negate the accept status of every node. Oh, and I guess if I have union and negation I can build ¬((¬A)∪(¬B)), can't I? Although I hope there's a better way...
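There is a better way: the standard product construction runs both machines in lockstep on pair states, accepting exactly when both halves accept. (And the negate-every-accept-state trick is sound, but only for a complete DFA; on an NFA you'd have to determinize first.) A toy sketch in Python, where my own encoding of a DFA is a (start state, transition dict, accepting set) triple:

```python
def product_dfa(d1, d2):
    """Product construction for the intersection of two complete DFAs.

    Each DFA is (start, transitions, accepting), where transitions maps
    (state, symbol) -> state.  Product states are pairs; a pair accepts
    iff both components accept.
    """
    s1, t1, f1 = d1
    s2, t2, f2 = d2
    trans = {}
    for (p, a), q in t1.items():
        for (r, b), u in t2.items():
            if a == b:
                trans[((p, r), a)] = (q, u)
    accepting = {(p, r) for p in f1 for r in f2}
    return ((s1, s2), trans, accepting)

def accepts(dfa, s):
    """Run a DFA on string s and report acceptance."""
    state, trans, accepting = dfa
    for ch in s:
        state = trans[(state, ch)]
    return state in accepting

# A: even number of 'a's.  B: ends in 'b'.  Alphabet {a, b}.
A = (0, {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, {0})
B = (0, {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}, {1})
P = product_dfa(A, B)

print(accepts(P, "aab"))  # True: even a's and ends in b
print(accepts(P, "ab"))   # False: odd number of a's
```

The same pairing idea works on NFAs too, which is why intersection never forces the exponential blowup that complementing an NFA can.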

I think the answer to your question is just that that's the way the history played out. People quickly found [^ABC] useful, so there's at least that much support for negation. But I guess people didn't see the need for intersection because we build our mental models of a pattern to match incrementally: a capital letter followed by plus or minus, or a lowercase vowel: ([aeiou]|[A-Z][-+]). Whereas to see a need for intersection, you have to be thinking globally, because it's not until the whole string is parsed that you know how it's doing. But except for the first sentence I don't feel confident about this paragraph.
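Just to make the "incremental" example concrete, here's that union pattern in Python's re syntax:

```python
import re

# A lowercase vowel, or a capital letter followed by + or -.
pat = re.compile(r"[aeiou]|[A-Z][-+]")

print(bool(pat.fullmatch("e")))   # True: a lowercase vowel
print(bool(pat.fullmatch("A+")))  # True: capital then sign
print(bool(pat.fullmatch("q")))   # False: matches neither branch
```

Each alternative is checked locally as the engine walks the string, which fits the incremental-mental-model point above.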