Development Central

Bill Sorensen's software development blog

The future of ASP.NET Core

The new ASP.NET Core 2.0 packages can no longer be used on .NET Desktop

This issue on the aspnet GitHub site has caused some controversy. So what's the deal?

ASP.NET Core 2.0 (which is not yet released) is currently targeting only .NET Core. Unless this changes, it means that the new version will not support the .NET Framework.

Some background (because the names are really confusing):

.NET Core is a cross-platform version of .NET. It is not currently feature-complete compared to .NET Framework.

.NET Framework is the Windows-based .NET we're used to. It supports everything but the kitchen sink.

ASP.NET Core is (more or less) ASP.NET MVC 6 + ASP.NET Web API with OWIN support. The latter is particularly nice for integration testing. Despite the "Core" name, it can (could) target .NET Framework or .NET Core.

This is simplified and not exact, but it's close enough IMHO.

Right now you can create a web site using the current version of ASP.NET Core (1.1) and target .NET Framework 4.x, reference existing assemblies, and deploy to IIS.

According to .NET Core Support Policy (which also covers ASP.NET Core per the FAQ), 1.1 support could end 12 months after the 2.0 release (if I'm reading this right). 2.0 may be out this month. The implication is that 1.1 support may end mid-2018 (see Scott Hanselman's comment).

So let's say you're an enterprise developer, and you need to replace a legacy ASP.NET Web Forms site. The site uses a lot of in-house and third-party DLLs, integrating with SOAP services and COTS software. What technology do you use?

A week ago I recommended ASP.NET Core 1.1 targeting the .NET Framework for just this scenario. Now I'm finding that was unwise advice. In a year, we could end up with a site built on an unsupported technology, with potentially no migration path. All it takes is one feature that .NET Core doesn't support.

Currently, better options include ASP.NET MVC 5 and the excellent Nancy framework (which does support OWIN).

I believe this will hurt adoption of ASP.NET Core, particularly in the enterprise. My hope is that the team will reconsider, as none of this is final.

[Update: Support for ASP.NET Core 1.1 on .NET Framework will be extended until July 2019 at least. See Damian Edwards' comment.]

The many costs of component libraries - a further consideration

 

(Guest post by Bob Dawson)

Bob has held titles ranging from Armor Officer to Director of Bioinformatics, but he still thinks of himself primarily as a software designer and developer. His interests include world travel, history, narrative theory, and software architecture.

I've had the privilege of working with Bill, and I fully appreciate the circumstances that led to this frustration with visual code libraries. Nevertheless, user interface component libraries may be a 'baby and bathwater' issue: cautions on how to survive in the middle of the road are more helpful than making the road sound too dangerous for anyone to use. Visual component suites do indeed have costs far beyond the initial sticker price. Of course, acquisition cost is never the full story for any code. The additional costs and risks must be appreciated and managed.

All that said, I still think there's a strong case for them, for both intrinsic and extrinsic reasons. First, Bill's quite right in saying that no 3rd party provider is likely to command the depth of testing facilities and user base that an OS vendor enjoys. But to me that's not the full story: a component vendor certainly has more of both than the end-point development operations it in turn supports, whether those are commercial software shops or internal business developers. So I think the build-or-buy equation still tips dramatically in the direction of buying, not writing, common interface controls. There's simply no way, as an endpoint shop, that I'd want to compete with the economies of scale, testing base, or degree of focus that a visual component library vendor brings.

Then too, I'd point out that, at least for Windows shops, there isn't really a competitive library actually included in the OS. Historically, Microsoft-supplied visual components didn't belong to the OS team and weren't in the box at the OS level. Rather, MS Word and Office promulgated new components with every release for many years, so those Microsoft applications constituted the actual competition and 'effective standard' for 3rd party applications. That's why 3rd party library vendors always tended to advertise 'Office look and feel' capabilities rather than merely improving on vanilla Windows controls. In the face of that reality, any shop that doesn't use external libraries, and doesn't undertake the potentially massive task of writing its own, is starting very much behind user expectations.

That being the case, however, it remains that Bill's right about the potential costs and risks of such external dependencies, so let's list out some guidelines for risk management:

1. Have an official 'formulary'; publish it and enforce it. New systemic dependencies in the business's technical stack should not be introduced lightly, nor without the full knowledge of the entire dev organization. Having dependencies on dozens of one-offs that no one even knows are available is worse than no library at all, and not involving the entire development community in library selection is like producing an application without input from the full user population, because that's what it is. Dependency selection is an organization-wide issue that cannot be delegated to individual teams.

2. Never buy or even consider libraries without source code. If you can't compile it and test it, it's about as useful (and dangerous) as that app that what-was-his-name wrote back when he worked here in some language that seemed cool at the time. Sourceless code is toxic no matter how it's acquired.

3. For the same reason, component libraries belong in your source code control system and your build cycle (see 2). Compiling it today doesn't mean it will still work on the next version of your language/stack/VM/OS mix etc. Dependencies are intrinsically dangerous—uncontrolled dependencies are much worse.

4. Plan for that extra upgrade-cycle delay that Bill mentioned: you can't just move to the next OS or tool-chain version; you have to wait for the component vendor to move before even considering your own migration.

5. Plan for and assign a support specialist. Just as you wouldn't adopt a database without someone to handle the DBA tasks, you don't want to promulgate a visual component library without immediately available expertise. Even vendors who offer on-call technical support often encourage shops to appoint a specific technical contact for support questions, and they're right. Your team ought to have someone who can act as the internal help desk for library capabilities, good practices, and limitations. Your library starts dragging on productivity if "don't do X" is something every developer using it must learn individually through bad experience. Lack of a component expert becomes an organizational learning disability.

6. Make component use a specific code review question—is the component really necessary? Is it being used as designed and appropriately?

While the latter question is relatively straightforward (something your internal library advocate/authority should be able to answer), the question of need demands some additional attention. This is not simply a question of YAGNI or user delight, but of basic architecture. Visual components and libraries fundamentally compete on power, but does all that power really belong in the UI? In the case of a sortable grid, for example, shouldn't your business layer instead present a non-visual collection that accepts sorting and filtering plug-ins? In the worst case, drop-in library use can mimic the worst antipatterns of the old RAD, single-layer, all-business-logic-in-the-UI approach. Will the business logic ever be migrated across platforms or deployment patterns? Does a crucial piece of business logic get missed because another application uses a different component, or even the same component configured slightly differently? Is business logic buried in some visual component event handler rather than where it belongs, in the object and operations model?

A "does everything" grid can rapidly become a fatal constraint if, by its very flexibility, it blinds the programmer to use cases that s/he should have considered and made explicit, and explicitly supported, in the business object model. As a result, the business ends up with apps supporting workflows that were never anticipated or documented, aren't known to the developer team, and that may not be safe or maintainable. At that point, the development organization can be faced with systems that will be immensely hard to refactor because the developers don't have an actual handle on how the current system really works or needs to work. Their sense of the application logic has been drastically reduced. Logic and constraints are just there in the components, and don't ever get surfaced or consciously addressed. "I didn't know the app was doing that" and "I can't find how it does it" are poor opening positions when changes are needed, but a rich UI component library can make either statement very easy to come to. So while not needing to model or support how users actually work can undoubtedly save time if you're lucky, it's also a potentially fatal mistake: the conceptual and object models of the business are at best incomplete, and at worst completely wrong. Complex UI components are powerful, but as with any pixie dust, their implications for total code maintainability may be wide, deep, and, most dangerous of all, hidden.

All that said, I still see UI libraries as tremendous tools, but my approach is cautious. Ideally, an application should be able to satisfy all known requirements and use cases without them, and you should be confident that the problem domain is understood and adequately modeled. In a way, I'm advocating using them primarily as last-minute substitutions to cover unknown border cases that, in a complex and changing business, can't realistically be anticipated, and that, under the banner of YAGNI, perhaps shouldn't be worried about until they emerge in actual practice.

Visual component libraries are a layer of delight and amazement. And more than being just eye-candy, they can in many cases function as safety nets for unanticipated needs. But no matter how fast and easy it might seem, they shouldn't be allowed to replace an organization's main business code, the central business logic on which the organization depends.

bobD

The many costs of component libraries

You've all seen them. They're advertised on the covers of trade magazines and on your favorite tech podcasts. Maybe you've won an Ultimate Edition (valued at $2000!) at a users' group. They're component libraries. Typically these will be collections of user interface components, although some may be non-visual. They've been around for decades, and cover a variety of languages and platforms. And in my opinion, the costs of adopting one often outweigh the benefits.

The benefits are obvious. Would your users like a fancy grid? Does your platform (HTML, .NET, etc.) not provide one, or is the stock version too basic? Just drop this component on your page and go!

The costs, though, are more subtle.

Licensing

Let's say the cost is $1500. That's typically per developer. If you think you can get away with buying fewer licenses than you have developers, think again. This tends to result in "only Jane can build project Y," which is a toxic state. You also probably want a consistent look-and-feel across your portfolio.

Perhaps that cost is justified. A year later, you want to move to the new version of Visual Studio (for instance). Unsurprisingly, your component library does not support this. The new version of the library does, and upgrading is only $750 per developer...

Upgrading

You decide to pay the upgrade costs. Now someone has to actually upgrade existing applications. There may be (and often are) breaking changes. These may be minor, or they may require re-architecting applications. In some cases you need to run a vendor-provided tool over the source code of every application.

The new version may introduce bugs. Do you have a comprehensive suite of automated UI tests for all of your applications?

In any event, someone needs to update, test, merge, and possibly release every application that uses the components. I've seen this take days.

The worst case is when an internal library or package references the third-party components. (Don't do this!) That means updating the library, redeploying it, and then updating every application that depends on the internal library.

Supporting

If you choose not to upgrade, eventually your version of the component library will not be supported. It's possible that platform changes will break that version.

Even if you stay current, there are hidden support costs. I remember searching for a way to implement a particular feature in .NET WinForms. I found a simple solution (probably on Stack Overflow) for the Microsoft component. It didn't work. I finally found a post on the vendor's forums explaining that the component library we used didn't support this.

Microsoft components (for example) may be used by millions of developers. The community is huge. Most vendors can't match this.

Also consider developer experience. You can easily hire developers skilled in HTML5, ASP.NET MVC, WinForms, WPF, etc. It's likely that a new hire will have no experience in whatever third-party libraries you use, which may mean training and/or time costs.

If the vendor goes out of business, do you have the source code? Can you maintain it?

Final thoughts

Proper requirements gathering may reduce the perceived need for component libraries. For instance, a grid with user-definable columns may not be necessary if you know what columns your users require.

Component libraries often result in vendor lock-in. Caveat emptor.

AngularJS - is it worth it?

Over the last year and a half, I've been doing a lot of work on a single-page app (SPA) built with AngularJS (a.k.a. Angular 1). I didn't write the initial version, but I was part of a team developing new features for it.

Caveat: I don't consider myself an AngularJS expert.

That said, I plan to steer clear of AngularJS on my next project.

Why not AngularJS?

1. Testing.

There aren't a lot of options for doing end-to-end testing with AngularJS. The official recommendation is Protractor, which is built on Selenium. Protractor has its own learning curve, particularly in the way it hides asynchronous calls with "magic." The documentation also recommends against using it with PhantomJS, so tests run in a full browser and are relatively slow.

We disabled all of our Protractor tests. Several team members (I was one) spent multiple days over the course of weeks trying to get the tests to run reliably on our build server. We gave up in frustration.

We switched to unit tests (Jasmine + Karma). These work reliably with PhantomJS. They aren't easy to write, though. Every component requires a different type of test. Want to test a controller? Look up how to do it. A directive? That's different. A filter? Different still. A service? Different. A directive with a template? Different - install and configure a library to handle the template cache.

http://angulartestingquickstart.com/ is helpful for untangling the complexity.

Nightwatch.js has recently appeared on the scene; I've heard it's possible to test AngularJS with this powerful tool.

2. Framework.

AngularJS is a framework, not a library. It attempts to provide solutions for nearly every aspect of building a web application. One cost of this approach is complexity. Tutorials may give developers the impression that AngularJS is simple; it's not. Even after using it daily for months, I was still learning important details.

Some aspects are needlessly complex. For example, nearly every time I use ngOptions I have to look up the syntax. One would think that creating a drop-down list from an array of objects and binding it to an identifier would be a common use case.
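For the record, that common case looks something like this (the vm, item, and items names are illustrative):

```html
<select ng-model="vm.selectedId"
        ng-options="item.id as item.name for item in vm.items">
</select>
```

Readable once you know it, but the "select as label for value in array" comprehension syntax is nothing like the rest of the framework.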

I distrust frameworks in general; they tend to be less flexible than building an application using focused libraries. If a God Class violates the Single Responsibility Principle, doesn't a framework suffer from the same issues?

3. Documentation.

I referenced the AngularJS documentation frequently during development. Then I'd go out on Stack Overflow or Google and try to find a clear explanation. The official documentation appears to have been written as a technical reference, and I personally find it difficult to follow.

4. Fragile.

I lost count of the number of times I forgot that myName becomes my-name in AngularJS. Except with filters.

Moving markup that worked perfectly into a template on a directive caused display issues that we never did resolve.

There are a number of other "gotchas" in the AngularJS world. Mistakes (such as typos) tend to fail silently. Part of this is the nature of JavaScript and dynamic languages in general, but it doesn't make things any less painful.

Here's one that took some time to track down: Angular $http calling success on 404

5. Short-lived.

Angular 2 is here. Much of what I've learned with Angular 1 will be obsolete eventually. How much study time do I want to spend on this? How long will it be around?

If you want to use AngularJS...

Follow the AngularJS style guide by John Papa. The guide is endorsed by the Angular team. If we had started with this, development would have been much less painful.

Avoid $rootScope whenever possible. Think of it like using global variables. Leverage services instead.

Use UI-Router. Don't even start with the AngularJS router. It will paint you into a corner of workarounds and hacks. This article opened my eyes.

What should I use instead?

I don't know. I like React's philosophy, but I'm still a beginner with that. I haven't tried other SPA frameworks (including Angular 2). Consider if you really need a SPA; would ASP.NET MVC plus a bit of Knockout do the job?

Whatever you choose, look for simplicity, testability, and clear documentation. Don't be sucked in by "look how fast you can build a to-do list!" samples.

Knockout.js learning tips

It's been a while. I'm doing web development at my new job, so this and future posts may focus on that.

I used Knockout.js recently, and I'll share a few tips that I learned the hard way.

1. Watch the parentheses.

Remember that ko.observable objects are functions. If you're binding to the property and nothing else, you can omit the parentheses. If an expression is involved, you'll generally need them. If in doubt, include them.

The easiest way to avoid the need for parentheses is to put as much logic as possible in the view model. This minimizes expressions in the markup.
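To see why the parentheses matter, here's a minimal sketch of an observable as a plain getter/setter function. This is not Knockout's actual implementation, just an illustration of the shape:

```typescript
// Illustrative only: an "observable" is a function that reads with ()
// and writes with (value). Knockout's real ko.observable adds change
// notification on top of this shape.
function observable<T>(initial: T) {
  let value = initial;
  return function (next?: T): T {
    if (next !== undefined) value = next;
    return value;
  };
}

const firstName = observable("Bill");
console.log(firstName()); // read the current value: "Bill"
firstName("Bob");         // write a new value
console.log(firstName()); // "Bob"

// In a binding expression, firstName alone is the function object;
// firstName() is its current value -- hence the parentheses rule.
```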

2. Be careful mixing server-side and client-side code.

This was on an ASP.NET MVC site, and we had both view engine markup and Knockout bindings originally. This proved difficult to reason about. While it's definitely possible, remember that Knockout is only going to see the page once it's rendered client-side.

3. Avoid comment (containerless) Knockout bindings.

I found that these did not seem to play well with templates, and they may not work with IE8.

4. Don't mix if bindings with other bindings.

I combined an if and a text binding in the same element. This resulted in an error of "You cannot use these bindings together on the same element." One solution is to use another span or div.
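The workaround looks like this (binding names are illustrative):

```html
<!-- Not allowed: if and text on the same element. -->
<!-- <span data-bind="if: isReady, text: message"></span> -->

<!-- Works: move the text binding to an inner element. -->
<span data-bind="if: isReady">
  <span data-bind="text: message"></span>
</span>
```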

5. Don't mix if bindings with CSS classes or other markup.

In general, use if bindings on a div that has no classes, etc. The issue is that when the binding is falsy, the element still renders; it's just empty. Styles on that empty element can cause undesired visual artifacts.

6. Be cautious if mixing jQuery and Knockout.

We were using jQuery to wire up form submission to a class that was in a Knockout foreach. It wasn't working. The fix was to switch to a Knockout submit binding. Knockout can work fine with jQuery in most cases, though.

7. Remember what binding context you're in.

Especially with foreach, it's easy to forget and bind to the current item when you meant to bind to $parent or $root. The Knockoutjs context debugger (search the Chrome store) can help.
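A sketch of the gotcha (names are illustrative): inside a foreach, bindings resolve against the current item, so reaching the view model's own members takes $parent:

```html
<ul data-bind="foreach: orders">
  <li>
    <!-- Binds to the current order item. -->
    <span data-bind="text: description"></span>
    <!-- removeOrder lives on the view model, not the item. -->
    <button data-bind="click: $parent.removeOrder">Remove</button>
  </li>
</ul>
```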

8. Don't try to do progressive enhancement.

If the client doesn't have JavaScript enabled, skip the whole Knockout section. See http://stackoverflow.com/questions/8961073/progressive-enhancement-with-knockoutjs.

9. Encode where appropriate.

It appears that attr bindings don't encode anything (although the text binding does). This is particularly relevant when binding to the href attribute of an anchor.
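One way to handle this is to encode the value in the view model before it ever reaches the binding. A minimal sketch (the names and input are hypothetical):

```typescript
// User-supplied text must be encoded before it reaches an href;
// per the tip above, an attr binding will not do it for you.
const searchTerm = "C# & .NET"; // hypothetical user input
const safeHref = "/search?q=" + encodeURIComponent(searchTerm);
console.log(safeHref); // "/search?q=C%23%20%26%20.NET"
```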

10. Use foreach on the parent element.

The documentation is clear on this, but it's easy to misread. Binding foreach to a child element typically results in missing closing tags in the rendered output.
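A minimal sketch of the right placement (names are illustrative):

```html
<!-- Bind foreach on the parent <ul>, not on the <li>. -->
<ul data-bind="foreach: items">
  <li data-bind="text: name"></li>
</ul>
```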

I like Knockout, and the learning curve isn't very steep. Keep it simple and it seems to work well.

Notes: Case Study: Algorithmic Trading in a Production Haskell Environment

Notes on a talk by John Lato of Tsuru Capital at Iowa Code Camp.

Coursera - financial modeling, etc. courses are available

Derivatives market is huge. (A quadrillion dollars!)

Create a model for the prices structure and derive a formula from the model.
Example: Black-Scholes Option Model - statistical volatility is the main variable, along with underlying price and time to maturity
Delta - underlying, Vega - volatility, Theta - time (The Greeks) (actually equations)
Wikipedia & Investopedia have articles on this.
Modeling is very math-y.

Algo Trading: Reality

  • Buggy 3rd party code
  • Out of spec implementations
  • Parsing

(Sound familiar?)
Can measure success: Are we making money?

1-2 GB/day for a small subset of the orderbook from the MarketFeed.

Knight Capital SEC article - how software deployment issues can cost your business money.

How Tsuru uses Haskell

  • Type system
  • Concurrency model
  • Foreign interface
  • Meta-programming


The Haskell Kool-Aid: If it compiles, it works! (Mostly true.)

Newtypes are "free," apply liberally. (Similar to units of measure in F#.)
  This way you don't apply the wrong function to a price, etc.
  Way to preserve invariants (can include constraints).
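A rough analogy outside Haskell (my example, not from the talk): TypeScript can fake newtypes with "branded" types, zero-cost wrappers that keep distinct kinds of numbers apart at compile time:

```typescript
// Illustrative branded types: Price and Quantity are both numbers at
// runtime, but the compiler treats them as distinct types.
type Price = number & { readonly __brand: "Price" };
type Quantity = number & { readonly __brand: "Quantity" };

const price = (n: number): Price => n as Price;
const quantity = (n: number): Quantity => n as Quantity;

// Only this signature combines the two, so you can't apply the wrong
// function to a price by accident; swapped arguments won't compile.
const lineTotal = (p: Price, q: Quantity): number => p * q;

const total = lineTotal(price(10), quantity(3));
console.log(total); // 30
// lineTotal(quantity(3), price(10)) would be a compile-time error.
```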

Coming to a strongly-typed functional language from a dynamic language is hard.
You have data, and you have functions. Model your data accurately.
If you have a Maybe Int, you can only use it with things that work with Maybe.
Rather than throwing exceptions, you create types that preserve invariants.

Parametric Polymorphism
Functions that work with, for instance, a list of any type
Can't alter your data, as they have no interface to interact with it
Similar to generics in C#/Java
Example: many :: Parser a -> Parser [a]
  (a is generic)
You can do a lot of this stuff in C# now, but it gets very verbose.
Type inference may infer a type more general than you thought, in which case you get code for free! (In functional languages.)
(In Haskell, it's considered good practice to write type signatures, at least for top-level functions.)
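As a sketch of the same idea in TypeScript (my example, not from the talk):

```typescript
// Parametric polymorphism: this function works for a list of any type T.
// Because it knows nothing about T, it cannot alter the elements;
// it can only select or rearrange them.
function firstAndLast<T>(xs: T[]): [T, T] {
  return [xs[0], xs[xs.length - 1]];
}

console.log(firstAndLast([1, 2, 3]));  // [1, 3]
console.log(firstAndLast(["a", "b"])); // ["a", "b"]
```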

Concurrency
Immutable-by-default and lack of side effects are a big win
Some challenges remain (e.g. no mechanism for thread priority)
In F#, considered good practice to make a method pure unless good reason
Side effects are "Spooky action at a distance"
In Haskell, if something doesn't have IO (monad) in the signature, it doesn't do I/O.
Similar functionality to async/await. In functional languages, typically the syntax is very clean.

hackage.haskell.org - lots of code examples
ghc - compiler most people use - offers REPL, includes mem/perf profiler
Cabal - build system
learnyouahaskell.com
fpcomplete.com
ideone.com

github.com/johnLato

In Haskell, deep debugging can be hard. Traditional stack traces are tricky.

Testing
Have both unit tests and functional tests. (HUnit)
QuickCheck - Haskell (and more?) testing tool - generates random input
Checks results - good for catching corner cases
Also use a simulator in place of exchange, playing back feeds, etc.

Notes: Learn Every Programming Language

Notes on a talk by Patrick Delancy at Iowa Code Camp.

If you don't practice it, you lose it.

Why

The inability to learn programming languages could force you to change careers.
The idioms and concepts you learn will help you with your current language.

How

Start with the underlying principles. Understand the paradigms. The rest is just syntax.

Programming Paradigms (in order of complexity, low to high)

  1. Procedural
  2. Logical
  3. Functional
  4. Object Oriented

That's not the order in which we typically learn them.

Thinking Procedurally

  • Loops
  • Procedures/Functions/Subroutines
  • Global/Static Variables
  • Jump/Goto
  • Lexical Scoping
    • Text in your code defines variable scope
    • Not based on time (such as global after you define it)
    • Block scoping, function scoping, etc.

Thinking Objectively

  • Class/Entity
  • Abstraction
    • Making something simpler.
    • We don't have to understand how it works.
  • Encapsulation
    • An object, if it exists, should never be in an invalid state.
  • Inheritance
  • Polymorphism

Very domain-driven. Easy to translate the business needs.

Thinking Functionally

  • Functions as values
    • Lambdas, anonymous delegates
  • Pattern Matching
    • Regular expressions
    • Define shape of a value or object
    • Compiler can warn you of missing cases
  • Composition
    • Chaining function calls
  • Partial Application
  • Monad/Computation Expression
    • Binding functions together and defining how they behave
    • Helps to solve the problem of side effects (like I/O)
  • Closures
    • A function plus its referencing environment
    • Such as the memory address of a variable
    • Necessary evil (avoids globals)
  • Deconstruction
    • Pull out variables from object (often through pattern matching)
    • Lists are head + tail
  • Recursion
    • Tail recursion avoids running out of stack
  • Cons/Car/Cdr
    • Cons constructs a new value from two inputs
    • Linked lists
  • Etc.

What is the shape of the process?
Focus on behavior, not data.

Thinking Logically

  • Facts
  • Relationships
  • Goal-reduction

Classification
Languages don't fit into these buckets anymore.
They steal ideas from other paradigms.

Notes: Agile's Dirty Secret

Notes on a talk by Tim Gifford at Iowa Code Camp.

CelebrityAgilist.com - GitHub community

Agile Manifesto - no processes are listed.
See also the related 12 principles.

SAFe Agile process (new RUP?) - speaker has seen companies turn the PSIs from that into little waterfalls - easy to lose track of what the customer values

Incremental - you don't see the whole picture until everything is done
Iterative - see the picture, but it starts out fuzzy
(Like interleaved graphics downloading)

User Story Maps - read more on this!

Weinberg's Law of Raspberry Jam - the wider you spread it, the thinner it gets (poor understanding)

Testing Boundaries problem
3 stages with 10 code paths each - 10 × 10 × 10 = 1,000 end-to-end tests to cover every combination
If we get a defect in one code path, 100 tests fail.
Spend all your time fixing broken tests.
10/10/10 unit tests + 200 integration tests + 1 end-to-end = 230 tests
Only 22 or so fail if there's a defect.
(Same combinatorial problem Mark Seemann talks about on Pluralsight.)
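The combinatorics behind those numbers can be sketched directly (stage and path counts are the talk's example):

```typescript
// Three sequential stages with ten paths each: end-to-end coverage is
// multiplicative, unit-test coverage is additive.
const stages = 3;
const pathsPerStage = 10;

const endToEndTests = Math.pow(pathsPerStage, stages); // 10 * 10 * 10 = 1000
const unitTests = stages * pathsPerStage;              // 10 + 10 + 10 = 30

console.log(endToEndTests); // 1000
console.log(unitTests);     // 30
```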

Large problems need leadership & courage.

Large Teams
More people => fewer questions => less learning
Interpersonal issues - why not split them up on clique boundaries?
Responsibility is spread out.

Small Feature Teams
Top-to-bottom, delivering customer value.
More opportunities for leadership roles.

He's using Feature Toggles.
He's very much in favor of it, but there's a downside.
You need to make sure you're truly releasing the software, not just delivering with features turned off.

It's important to still have a release plan.

Give pilot teams learning time - an extra point per story, etc.
You don't start out with TDD/BDD/ATDD at full velocity.

You cannot teach craftsmanship. Knowledge does not change behavior.
(There are marriage counselors who are divorced.)

"It's always a people problem" - Weinberg's 2nd Law of Consulting
(Secrets of Consulting is Weinberg's book)

Drowning in Defects
Agile practices can help with new defects - but what about legacy?
Defect age ranged from 2 hours to 2 years on one of his projects.
The variability resulted in angry customers.
What if we do defects in the order in which they were received? FIFO?
What about the old ones? Ignore them - if they're important, they'll be reported again.
Prioritization can be waste.
Capacity needs to align with demand - the business needs to step in if the bug requests are overwhelming the developers (more hires or stop reporting bugs until developers catch up).
Leadership doesn't come from authority. If you see a problem, fix it.
Why do we let defects accumulate?

In order to run fast, you need to run clean.

Don't start coding until you have a Given-When-Then scenario.

Notes: Becoming an Outlier

Notes on a talk by Cory House (@housecor) at Iowa Code Camp.

Career Reboot for the Developer Mind

What would you do if money were not a concern?
Why aren't you doing that?
If you choose money over happiness, you're working to make money so that you can live and work doing something you dislike.

Job, career, or calling?

Get up every day excited about what you're going to do.
Work "with" a company rather than "for" them.
Do it because you're having fun!

How to be an outlier:
Spend your time consistently doing things that people care about.

We're all weird. Some people will always look at you with pity.

Step 1: Be Modal
40% of your day is habitual.
Takes 10,000 hours for expertise (of deliberate practice)
You under-use your free time.

Multithread your life.
Hack the commute. (Podcasts, audiobooks, etc.)
Work from home.
Take public transit.
Read or watch tech videos while doing cardio.
Listen to podcasts while doing yard work.

Rethink giving back.
Contribute to open source, pro bono development, Give Camp, etc.
(Rather than building a house, donate your most valuable skill.)

The Maker's Schedule
Managers live with regular meetings, interruptions, etc.
Developers need uninterrupted thought to get into Flow.

Alter your schedule - go in early, etc.
Don't even open your email for an hour.
Larger, less frequent meetings

Improve your Signal to Noise
The cheaper you can live, the greater your options.

Automate relentlessly.
Direct deposits, auto bill pays, reminders, etc.
RescueTime - software tool

Target media consumption.
Recognize what you're giving up by watching reality TV.
InfoQ, TekPub, Pluralsight, TED talks, DNRTV
Manicure your stream. Slow media is for the focused few.
The less news you consume, the bigger the advantage you have.
Why waste your time on something you cannot control?
Focus on what you can influence.

The more you make per hour, the more time you can buy.
Use delegates. Hire someone to do the yardwork, etc.

Step 2: Manage Your Image
Pick Two - Neil Gaiman graduation speech on YouTube

Purpose built - how do you want people to look at you?
Self-image is your greatest constraint.
Can I see your body of work?
Do it in public - StackOverflow, GitHub, Twitter, blog, etc.
Your words are wasted if you're not blogging.
Measure: GoogleAlerts, TweetBeep, etc.
Shelved credentials - books attract alpha geeks and convey passion.

Step 3: Own Your Trajectory
(Of your career.)
What if every piece of your workday moved you toward independence?
(Example: Pluralsight)

Search for Scale

  1. Work
  2. Lead (talk about work)
  3. Own (products, frameworks, author)


Time = potential knowledge
Beware of becoming an assembly line coder.

Opportunity wedge
Life is characterized by closing horizons and lost opportunities.
You can extend this by enhancing your skills.

Don't work for money.  Work to learn.

Key to success: Learn to teach yourself more efficiently than any institution can.

Luck Surface Area - do what you're passionate about and communicate that to as many people as possible
Speaking, blogging, conferences, user groups, Tweet, never eat alone
Make the Hang - spend time with people you look up to
You are the average of the five people you spend the most time with. - Jim Rohn

Lean into Uncertainty
Find what scares you and run at it.
Speak at a conference.

You Don't Need Permission.

If it's work, we try to do less.
If it's art, we try to do more.

Notes: Pragmatic Architecture in .NET

Notes on a talk by Cory House (@housecor) at Iowa Code Camp.

Clean Code video on Pluralsight.

Religion is the problem in architecture. Always/never.
Look at the context.
It depends.
People who are experts answer questions with more questions.

Over-architecting wastes money.

Timelines

What if one month late is worthless?

Parkinson's Law: Work expands to fill available time.

Consider Complexity (speaker considers the following complex)

  • Automated testing
  • Coding to an interface
  • SOA
  • Rich Domain Model (DDD)
  • ORM vs. custom DAL
  • Repository Pattern
  • Layered architecture


Consider Simplicity

  • Do the simplest thing that could possibly work
  • Lean/Agile principles (minimize WIP, etc.)
  • YAGNI
  • 80/20 Rule


MVP
Book: Lean Startup - minimum viable product (MVP)
Scalability, maintenance costs, performance, etc. may not matter

We're paid for solutions, not code.

Flexing features (scope) is a far better approach than flexing quality.

Is it worth taking on technical debt today?
Hard vs. Soft deadlines

  • Trade show?
  • First to market?

Single loud customer, salesman misspoke, wild-ass guess, MS Project said so - terrible reasons for hard deadline

Technical
Layers - logical (separation of concerns)
Tiers - physical (often decided for you)

For a small enough application, methods are layers.

Everything's a trade-off.

Architectural Levels

  1. Simplest thing
  2. Somewhere in between
  3. Every tool in the shed


Active Record pattern mixes Domain and Data. Breaks SRP.
Easy to understand, though. Consider for CRUD apps or simple domain.
Rigid: Domain model = DB
Leads to God object
Hard to test
Converting to Repository Pattern = pain

Eric Evans - DDD book
Great for complicated, long-lived applications
Takes time

Level 2 (somewhere in between)
Focus on the pain

No Free Lunch

At what point does our application complexity grow to the point where the effort to enhance gets too painful?

MVP, junior team, simple domain, tight timeline, throwaway - consider Level 1

Flagship product, senior team, complex domain, long-term, security matters, flexible timeline - consider Level 3

POEAA (Fowler) and Dino Esposito's architecture book inspired this talk.
Microsoft .NET: Architecting Applications for the Enterprise is the latter.

Speaker recommends clean, readable code regardless.