Freelancers using JavaScript in Ashburn
Cain A.
Fairfax, USA
I build modern, responsive websites that stand out.
MatrixWell Collective - Tech Agency Brand Kit
Apex Properties: Luxury Real Estate Platform Concept
ReelRoom: Cinematic Theater Website Demo
Mystic Haven E-commerce Interface Design Demo
JavaScript (11)
Dariush Samari
Rockville, USA
Web Dev & AI Evaluator: Clear, Creative, Reliable Work
Modern Landing Page Template
3D Electric Field Visualization Project
Doppler Effect Simulation with Three.js
Dash Samari's Resume Website Development
JavaScript (3)
Conor M
Washington, USA
Design partner for early-stage startups 🚀
$5k+ Earned · 2x Hired · 5.0 Rating · 103 Followers
Responsive Webflow Animation Project
Designing a Brand and Multi-Page Website for a Consultancy
Kickstarter needed landing pages for their new Partner Program, making it easy for creators to understand the program, compare partners, and take action.

What we delivered:
- Partner Program landing page with clear hero, featured partners, and "Become a Partner" CTA
- Partner Directory with scannable cards and filters by service type/region/specialty
- Modular Figma system they can ship now and extend later

Our process: low-fi wireframes → high-fidelity with branding → final polish with developer annotations. No guesswork. In 3 weeks: wireframes (desktop, tablet, mobile), high-fidelity designs, a reusable component library, and a development-ready file.

"Conor was a great partner—very well organized, hit fast deadlines, guided us with the right questions and delivered solid work. Would definitely work with him again."
Website Design and Webflow Development for Review Harvest
JavaScript (2)
Caleb Lee
Silver Spring, USA
AI wizard | Vibe Coder
$1k+ Earned · 2x Hired · 8 Followers
On Purpose Events Dashboard Development
AI-Powered Workflow Platform Development for Eververse.ai
Storyline Health Patient Dashboard Development
AI-Powered Marketing Explainer Video for Meilisearch AI
JavaScript (1)
Logan Maddox
Washington, USA
Framer & Graphic Designer | Web Designer | UI/UX Designer
New to Contra
ClickOut - Fashion Ecommerce Website

ClickOut is a modern and stylish e-commerce platform designed to revolutionize the online fashion shopping experience. With a user-friendly interface, seamless navigation, and a curated selection of trendy apparel and accessories, ClickOut aims to be the go-to destination for fashion enthusiasts. The platform will provide a smooth and engaging shopping experience with advanced features like AI-driven recommendations, virtual try-on, and personalized user dashboards.
Masterful

Based on a Framer template, developed a website for the project masterful.info (http://masterful.info).
TechFly Graphic Designs

"Techfly: A captivating text logo that exudes innovation and sophistication. With sleek typography and subtle design elements, it encapsulates the essence of technology and modernity. Elevate your brand with Techfly."
Logo Design

I specialize in creating clean, memorable logos that help brands stand out and communicate their identity with confidence. My approach combines strategic thinking with thoughtful design to deliver logos that are timeless, versatile, and tailored to each client's vision. I design logos that tell your brand's story. With a focus on clarity, creativity, and detail, I help businesses bring their ideas to life through distinctive, meaningful design that leaves a lasting impression.
JavaScript (2)
Brian Pyatt (Pro)
Ashburn, USA
Senior AI Solutions Architect | Agentic Systems & Product
New to Contra
GitHub - spotcircuit/rebar: Rebar — structural memory framework…
Velocity Electric : AI Voice Agent + CRM Integration
Textpro.ai — AI Universal Concierge Platform
A lot of people building with Claude Code-style agents are still focused on prompt engineering. I get why. It's the most visible lever. But I think the bigger opportunity is usually somewhere else: the reusable skills, workflows, or slash commands an agent relies on over and over again. Those are what shape behavior over time. And in my experience, improving them is less about piling on new instructions and more about tightening the loop around failure. Watch where the agent breaks. Figure out why. Fix the workflow. Repeat. Sometimes that means adding a rule. Just as often, it means removing one.

Over the last couple of days, I rebuilt the /close-loop cycle in my framework, Rebar, and it clarified something I've been feeling for a while: a lot of agent systems have evaluation. Fewer have a feedback loop that actually makes them simpler, cheaper, and more reliable over time. That difference matters.

In the old version of my loop, a feature could be marked complete because the evaluator returned a PASS. The orchestrator would close the issue, everything looked fine, and only later would I realize something important was still missing, like a Prisma migration file. So the feature wasn't really done. It just had the appearance of being done. The evaluator had often already pointed at the problem in its follow-up notes. But the system wasn't treating that kind of language as blocking. "PASS with follow-ups" was getting interpreted too generously. That was the real failure: not bad evaluation, but a weak handoff between evaluation and release.

So I rebuilt the loop around four gates, and all four have to pass before "done" means anything:

1. Evaluator: checks code, scope, and completeness and writes structured findings.
2. Release gate: scans those findings for blocking language like "must generate," "cannot ship," or "before any live DB." If that language shows up, the work is blocked.
3. Cycle-scoped improve step: promotes only the current cycle's validated observations into the expertise file, instead of dragging in stale backlog noise.
4. Meta-improve: looks across evaluator logs for repeated failure patterns and proposes changes to the templates themselves, with a human review step before anything sensitive gets updated.

That last piece is where the compounding effect starts to show up. The default instinct in agent systems is usually to add. Add another reminder. Add another caveat. Add another paragraph to the template so the model doesn't make that mistake again. Sometimes that's right. But it's also how workflows slowly turn into bloated instruction stacks that cost more and work worse. Every extra line gets paid for on every future run. And long prompts full of overlapping rules are often harder for models to follow consistently than a smaller number of clear ones. So the better question is not "what else should we add?" It's "what actually belongs in the workflow?"

In the first real cycle of the rebuilt loop, I saw four patterns:

- schema changes without Prisma migrations
- dirty working tree bleeding across features
- orphan Vue refs that were declared but never rendered
- Hono context typing debt across multiple routes

Only the first two justified workflow changes. The orphan refs were already being caught by the evaluator, so there was no reason to duplicate that logic in the template. The Hono typing issue was real, but it was cleanup work, not a process problem. That distinction matters more than it sounds. If every bug becomes a workflow rule, the system gets heavier every week. If you're disciplined about separating repeatable process failures from one-off implementation issues, the workflow stays lean.

And that's really the bigger point here. There are two things improving at the same time. First, context gets better: validated observations get promoted into structured expertise, so the next run starts with better knowledge of the codebase and less repeated discovery. Second, workflow gets sharper: the system looks at repeated failures and changes the reusable commands around the agent, ideally by adding only what consistently matters and cutting what doesn't.

That combination is where the gains compound. The agent starts with better context, but a lighter operating model. That's a much healthier direction than what a lot of systems drift toward, which is more and more prompt text, more accumulated edge-case handling, and rising cost without much improvement in reliability.

The artifact trail is what makes this workable. Each cycle leaves behind evidence: evaluator logs, raw findings, expertise updates, queued template patches, wiki notes. After enough cycles, you're not just reacting to the last annoying failure. You can actually see what keeps recurring, what was already covered elsewhere, and which instructions are no longer doing useful work. That makes subtraction much easier to justify.

And yes, there's a token-cost argument here too. A 2,000-token template invoked 50 times a day costs 100,000 tokens a day just to load. Trim 500 tokens of dead guardrails and the savings add up quickly. But the bigger win is clarity. In practice, models usually do better with fewer, more coherent rules than with long prompts full of defensive clutter. So shortening the workflow isn't just cheaper. It often improves quality too.

To me, this is the more interesting layer of agent design: not just agentic coding, but skill engineering. The reusable commands around an agent should themselves be under active improvement. Not based on vibes. Not based on one weird miss. Based on repeated observation and actual evidence.

If your setup doesn't have:

- an evaluator producing structured findings
- a release gate that can interpret blockers
- a way to detect recurring failure patterns
- a human review step for sensitive workflow changes

then there's a good chance the system will get more expensive over time, not less. Every miss turns into another sentence. Every edge case turns into another rule. Eventually you're feeding the model more instructions and getting less leverage out of them.

The better path is a tighter loop: less prompt where possible, more signal where it matters, and workflows that get sharper as the system learns. That's what I'm trying to build into Rebar.

Rebar is open-source. The close-loop command, the meta-improve queue, and the release gate are in the repo. Play with it, and if you see a dead instruction in my own templates, send me a pull request.
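The release gate described in the post can be sketched roughly as follows. This is a hypothetical illustration, not Rebar's actual implementation: the `Finding` shape, the `releaseGate` function name, and the blocking-phrase list are assumptions drawn only from the examples in the text ("must generate," "cannot ship," "before any live DB").

```typescript
// Hypothetical sketch of a release gate that scans evaluator findings
// for blocking language before a feature is allowed to close.
// The key idea: a PASS verdict alone is not enough. "PASS with
// follow-ups" that contain blocking phrases still blocks the release.

interface Finding {
  feature: string;
  verdict: "PASS" | "FAIL";
  notes: string; // the evaluator's structured follow-up text
}

// Illustrative phrase list, taken from the post's examples.
const BLOCKING_PHRASES = [
  "must generate",
  "cannot ship",
  "before any live db",
];

function releaseGate(finding: Finding): { blocked: boolean; reasons: string[] } {
  const text = finding.notes.toLowerCase();
  // Collect every blocking phrase that appears in the notes.
  const reasons = BLOCKING_PHRASES.filter((phrase) => text.includes(phrase));
  return {
    blocked: finding.verdict === "FAIL" || reasons.length > 0,
    reasons,
  };
}
```

Used on the Prisma-migration example from the post, a finding whose notes say a migration "must generate ... before any live DB" change would be blocked even though the evaluator returned PASS, which is exactly the weak handoff the rebuilt loop closes.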
JavaScript (1)
JONES LATRELL
Ashburn, USA
Software engineer; data annotation and labelling.
Clinical Management System
Patient-Doctor Appointment System Development
GitHub - brianogaro/customer_orders_project
JavaScript (2)
Meley Haddis (Pro)
Washington, USA
Fullstack engineer with dynamic design expertise.
My Crypto Haven
Personal Software Portfolio
lolTeams
GitCentral
JavaScript (2)